First photo of a total solar eclipse ever taken, shot in 1851


The first successfully captured photograph of a total solar eclipse, this daguerreotype was shot on July 28, 1851, by Prussian photographer Johann Julius Friedrich Berkowski.

Here’s a little history lesson to help you pass the time between now and the next total solar eclipse on August 21st. The photograph above, a daguerreotype captured almost exactly 166 years ago, is the first successfully captured photograph of a total solar eclipse.

The photo was captured by master daguerreotypist Johann Julius Friedrich Berkowski, a Prussian photographer who was commissioned by the Royal Prussian Observatory at Königsberg to do what nobody else had managed up until that point: capture an appropriately exposed photograph of a total solar eclipse.

Until then, every attempt had come out over- or underexposed, or had failed to capture sufficient contrast between the bright corona and the obscuring disk of the moon.

According to a paper in the journal Acta Historica Astronomiae, the photograph was captured using a small refracting telescope attached to the hour drive of the 15.8-cm Fraunhofer heliometer. Berkowski began exposing the image shortly after the beginning of totality, and the final daguerreotype required an 84-second exposure.

To learn more about this photograph, click here: http://adsabs.harvard.edu/abs/2005AcHA...25..128S

Lizard saliva derivative helps Parkinson’s patients

By Bradley J. Fikes

A diabetes drug developed by a San Diego biotech company from a venomous lizard’s saliva reduces Parkinson’s disease symptoms, according to a study published Thursday.

The placebo-controlled study of 62 patients found that the drug, exenatide, produced a statistically significant benefit in preserving motor control. It may actually slow disease progression, although this has to be confirmed with further research.

For Parkinson’s patients, the trial represents stronger grounds to expect more effective treatments. For San Diego’s life science community, it represents another example of the benefits of original research and innovation.

The study was published in The Lancet by researchers led by Thomas Foltynie and Dilan Athauda, both of University College London. While the study wasn’t particularly large, it was placebo-controlled, and its results are in line with a previous clinical study published in 2014.

Exenatide was found in Gila monster saliva by Dr. John Eng, an endocrinologist at Bronx Veterans Affairs Medical Center in New York. The venomous lizard, native to the Southwestern United States and northwestern Mexico, delivers excruciating pain with its bite.

San Diego’s Amylin Pharmaceuticals licensed the discovery in 1996. Further development yielded exenatide, sold under the brand name Byetta.

The drug became a hit, providing a major reason for Bristol-Myers Squibb’s $7 billion purchase of Amylin in 2012. Amylin itself was later disbanded and no longer exists.

Exenatide/Byetta reduces insulin resistance in Type 2 diabetes, allowing for better control of blood glucose. There’s evidence that Parkinson’s disease is also related to problems with insulin signaling.

The new clinical study improves on the previous study because it is placebo-controlled, according to an accompanying commentary in The Lancet. But the study has limitations that prevent it from being considered definitive.

“Whether exenatide acts as a novel symptomatic agent or has neuroprotective effects on the underlying Parkinson’s disease pathology remains unclear, but Athauda and colleagues’ study opens up a new therapeutic avenue in treatment of Parkinson’s disease,” the commentary stated.

Christian Weyer, M.D., a former Amylin executive, said one of the most interesting parts of the study was exenatide’s potential for modifying the course of Parkinson’s disease. Weyer is now president of Chula Vista’s ProSciento, a clinical services provider.

Patients’ motor skills were measured after 48 weeks of injections of either exenatide or a placebo. The treated group showed an advantage of 4 points on a 132-point scale, a statistically significant difference.

Exenatide mimics the action of a hormone, and such drugs often show disease-modifying properties, said Weyer, who was Amylin’s Senior Vice President of Research and Development.

“It’s not conclusive that exenatide has the potential for disease-modification, but I was impressed by the fact that the endpoint of the test was in the off-medication period, so you actually assess whether there’s an effect even after the treatment had been stopped,” Weyer said.

Amylin had performed early preclinical research on exenatide for Parkinson’s disease, Weyer said. The research was funded by a small grant from the Michael J. Fox Foundation.

In chronic diseases such as Type 2 diabetes and Parkinson’s, finding disease-modifying therapies is the “Holy Grail,” Weyer said.

“These are life-long diseases, and anything you can do to either delay or prevent the onset of the disease, or to slow its progression over a long period of time” has great benefit, Weyer said.

Insulin has many biological roles in the body, so it’s not surprising that an abnormal response to insulin could play a role in Parkinson’s disease as well as diabetes, Weyer said.

http://www.sandiegouniontribune.com/business/biotech/sd-me-exenatide-parkinsons-20170803-story.html

The Troxler Effect – why we see monsters in the mirror


by Sidney Stevens

Here’s a fun exercise, though you might not want to do it alone. Stare into a mirror for several minutes and you may notice your face begin to distort. It could even morph into something downright scary.

Seeing monsters in mirrors has long been an effective horror movie device and grist for urban legends. It might sound like a bunch of superstitious hooey, but scientific research now says that seeing altered images in a mirror is a real thing, most likely due in part to a type of optical illusion called “peripheral fading” or the Troxler Effect.

Playing tricks

The discovery that mirror-gazing leads people to see things comes from Giovanni Caputo at the University of Urbino in Italy. In 2010, he published an article in the journal Perception describing his otherworldly research findings. Fifty participants were asked to stare into a mirror for 10 minutes in a dimly lit space. Most of them saw far more than they bargained for. Some 66 percent witnessed huge deformations of their face, 18 percent saw an animal such as a pig or cat, 28 percent observed an unknown person and 48 percent beheld monstrous or fantastical beings.

If you were into ghoulish folklore games like Bloody Mary as a kid, these findings may help explain a few things. No one knows exactly how that slumber-party staple got started, but it involves staring into a mirror in a darkened room and saying Bloody Mary’s name repeatedly until she appears as a ghost or witch behind you. The idea is to give participants the fright of their lives.


This early 20th century greeting card shows a young woman staring into a mirror to spy her future husband with a witch watching. Divination rituals like this may have inspired the Bloody Mary legend. (Photo: Unknown/Wikimedia Commons)

Brain illusions

One possible explanation for Caputo’s “strange-face illusion” is the Troxler Effect (or Troxler’s Fading). In 1804, Swiss philosopher and physician Ignaz Paul Vital Troxler noticed that if someone stared at a fixed point for even a short time, peripheral images started fading away.

Experience peripheral fading for yourself by staring at the red dot in the center of the image below. You’ll notice the blue circle around it soon begins to disappear.

Troxler explained peripheral fading as visual neurons adapting to unimportant stimuli. At first you see everything in front of you, but then your eyes begin to ignore what isn’t necessary or within their central focus. This presumably frees up processing power and enhances perceptual efficiency.

Peripheral fading also may be at play in your body’s other neural systems. Think of putting on a watch. At first you’re aware of its weight on your wrist and the cool metal against your skin, but within minutes the sensation has faded. The watch is still there, but you no longer feel it. In the same way you may hear birds singing outside, but as soon as you begin reading your email, the sounds slip out of consciousness.

In the case of the strange-face illusion, as you stare at yourself in a mirror, peripheral areas of your face begin fusing or dissolving like objects in a Salvador Dali painting. Your mouth may stretch to one side, your forehead might meld into your cheeks and your eyebrows may droop to your chin.

Something more going on

Peripheral fading seems to explain the shape-shifting that Caputo’s research subjects witnessed in their own faces. But what about those who saw fantastical beings, animals and unknown people? The theory doesn’t fully account for these full-on hallucinations or the sense of otherness some participants felt when staring at them face-to-face. A few even reported beings with malevolent expressions that filled them with fear and anxiety.

Caputo argues that something else may be going on, a form of the “dissociative identity effect.” Perhaps the brain scrambles and then actively fills in new facial features (even really scary ones) in ways that science doesn’t yet understand.

So next time you crave a fear fix, forget hair-raising Hollywood blockbusters and frightful fun house rides. All you need is a mirror and your own brain.

https://www.mnn.com/green-tech/research-innovations/stories/seeing-monsters-mirror-troxler-effect

The World’s First Autonomous Ship Will Set Sail In 2018

By Vanessa Bates Ramirez

A Norwegian container ship called the Yara Birkeland will be the world’s first electric, autonomous, zero-emissions ship.

With a capacity of up to 150 shipping containers, the battery-powered ship will be small by modern standards (the biggest container ship in the world holds 19,000 containers, and an average-size ship holds 3,500), but its launch will mark the beginning of a transformation of the global shipping industry. This transformation could heavily impact global trade as well as the environment.

The Yara Birkeland is being jointly developed by two Norwegian companies: Yara International, an agricultural firm, and Kongsberg Gruppen, which builds guidance systems for both civilian and military use.

The ship will be equipped with GPS and various types of sensors, including lidar, radar, and cameras—much like self-driving cars. It will be able to steer itself through the sea, avoid other ships, and independently dock itself.

The Wall Street Journal states that building the ship will cost $25 million, which is about three times the cost of a similarly sized conventional ship. However, the savings will kick in once the ship starts operating, since it won’t need traditional fuel or a big crew.

Self-driving cars aren’t going to suddenly hit the streets straight off their production line; they’ve been going through multiple types of road tests, refining their sensors, upgrading their software, and generally improving their functionality little by little. Similarly, the Yara Birkeland won’t take to the sea unmanned on its first voyage, nor any of its several first voyages, for that matter.

Rather, the ship’s autonomy will be phased in. At first, says the Journal, “a single container will be used as a manned bridge on board. Then the bridge will be moved to shore and become a remote-operation center. The ship will eventually run fully on its own, under supervision from shore, in 2020.”

Kongsberg CEO Geir Håøy compared the ship’s sea-to-land bridge transition to flying a drone from a command center, saying, “It will be GPS navigation and lots of high-tech cameras to see what’s going on around the ship.”

Interestingly, there’s currently no legislation around autonomous ships (which makes sense since, well, there aren’t any autonomous ships, either). Lawmakers are getting to work, though, and rules will likely be set up by the time the Yara Birkeland makes its first fully autonomous trip.

The ship will sail between three ports in southern Norway, delivering Yara International fertilizer from a production facility to a port called Larvik. The planned route is 37 nautical miles, and the ship will stay within 12 nautical miles of the coast.

The United Nations’ International Maritime Organization estimates over 90 percent of the world’s trade is carried by sea, and states that maritime transport is “by far the most cost-effective way to move en masse goods and raw materials around the world.”

But ships are also to blame for a huge amount of pollution; one study showed that just 15 of the world’s biggest ships may emit as much pollution as all the world’s cars, largely due to the much higher sulfur content of ship fuel. Oddly, shipping emission regulations weren’t included in the Paris Agreement.

Besides reducing fuel emissions by being electric, the Yara Birkeland will supposedly replace 40,000 truck trips a year through southern Norway. Once regulations are in place and the technology has been tested and improved, companies will start to build larger ships that can sail longer routes.

The World’s First Autonomous Ship Will Set Sail In 2018

A genetic analysis finds that the Canaanites survived a divine call for their extinction and that their descendants live in Lebanon

There is a story in the Hebrew Bible that tells of God’s call for the annihilation of the Canaanites, a people who lived in what are now Jordan, Lebanon, Syria, Israel and the Palestinian territories thousands of years ago.

“You shall not leave alive anything that breathes,” God said in the passage. “But you shall utterly destroy them.”

But a genetic analysis published on Thursday has found that the ancient population survived that divine call for their extinction, and their descendants live in modern Lebanon.

“We can see the present-day Lebanese can trace most of their ancestry to the Canaanites or a genetically equivalent population,” said Chris Tyler-Smith, a geneticist with the Wellcome Trust Sanger Institute who is an author of the paper. “They derive just over 90 percent of their ancestry from the Canaanites.”

Dr. Tyler-Smith and an international team of geneticists and archaeologists recovered ancient DNA from bones belonging to five Canaanites retrieved from an excavation site in Sidon, Lebanon, that were 3,650 to 3,750 years old. The team then compared the ancient DNA with the genomes of 99 living people from Lebanon that the group had sequenced. It found that the modern Lebanese people shared about 93 percent of their ancestry with the Bronze Age Sidon samples.

The team published its results in The American Journal of Human Genetics.

“The conclusion is clear,” said Iosif Lazaridis, a geneticist at Harvard who was not involved in the study. “Based on this study it turns out that people who lived in Lebanon almost 4,000 years ago were quite similar to people who live there today, to the modern Lebanese.”

Marc Haber, a postdoctoral fellow at the Wellcome Trust Sanger Institute in England and lead author on the study, said that compared with other Bronze Age civilizations, not much is known about the Canaanites.

“We know about ancient Egyptians and ancient Greeks, but we know very little about the ancient Canaanites because their records didn’t survive,” he said. Their writings may have been kept on papyrus, which did not stand the test of time as clay did. What is known about the Canaanites is that they lived and traded along the eastern coast of the Mediterranean, a region known as the Levant.

“What we see is that since the Bronze Age, this ancestry, or the genetics of the people there, didn’t change much,” Dr. Haber said. “It changed a little, but it didn’t change much and that is what surprised me.”

At first the team was not sure if it would be able to retrieve DNA from the ancient skeletons, which were recovered from the hot and humid excavation site within the last 19 years. Dr. Haber had chosen more than two dozen bones from the site that looked promising and had them investigated for genetic material. It turned out that only five contained ancient DNA. All of those came from the petrous part of the temporal bone, which is the tough part of the skull behind the ear, from five different individuals.

After extracting that DNA, the team members compared it with a database that contained genetic information from hundreds of human populations. They then further compared their results with the genomes of the modern-day Lebanese population sample, which revealed what happened to the ancient Canaanite population.

“Genetics has the power to answer questions that historical records or archaeology are not able to answer,” Dr. Haber said.

He said researchers thought that migrations, conquests and the intermixing of Eurasian people — like the Assyrians, Persians or Macedonians — with the Canaanites 3,800 to 2,200 years ago might have contributed to the slight genetic changes seen in modern Lebanese populations. Still, the Lebanese retain most of their ancestral DNA from the Canaanites.

“It confirms the continuity of occupation and rooted tradition we have seen on-site, which was occupied from the 4th millennium B.C. right to the Crusader period,” Claude Doumet-Serhal, an archaeologist and director of the Sidon Excavation who is a co-author on the paper, said in an email.

She said that the archaeologists had found about 160 burials to date at their excavation site, which is in the heart of modern Sidon. They include graves and burials where a person was placed in a large jar, and they date to between 1900 and 1550 B.C. The genetic results further support the archaeological findings.

“We were delighted by the findings,” Dr. Doumet-Serhal said. “We are looking at the Canaanite society through 160 burials and at the same time uncovering a common past for all the people of Lebanon, whatever religion they belong to.”

The Biggest Facial Recognition System in the World Is Rolling Out in China

By Kayla Matthews

Facial recognition is set to have a significant impact on our society as a whole.

While many consumers are familiar with the concept because of the many smartphone apps that let them add various filters, graphics and effects to their pictures, the technology behind facial recognition isn’t limited to playful, mainstream applications.

Law enforcement is using next-gen software to identify and catch some of their most wanted criminals. But government officials in China are taking the technology even further by installing a nationwide system of facial recognition infrastructure—and it’s already generating plenty of controversy on account of its massive scale.

The Usefulness of Facial Recognition

Many applications of facial recognition are legitimate. China and many other countries use basic systems to monitor ATMs and restrict public access to government-run or other sensitive facilities. Some restaurants are even using the technology to provide food recommendations based on the perceived age and gender of the user.

Facial recognition is also useful in security. At least one prominent tourist attraction is using the technology to thwart would-be thieves. Similar systems have been installed at the doors of a women’s dormitory at Beijing Normal University to prevent unauthorized entry.

While it’s impossible to say how much crime the new system prevents, other women’s dormitories are already considering the hardware for their own use. Applications like this have a clear benefit for the entire nation.

Chinese officials are already praising facial recognition as the key to the 21st-century smart city. They’ve recently pioneered a Social Credit System that aims to give every single citizen a rating meant to help determine an individual’s trustworthiness or financial status, and the program’s progress has been spurred on by current facial recognition software and hardware.

Officials aim to enroll every Chinese citizen into a nationwide database by 2020, and they’re already well on their way to doing so.

The Controversial Side

Advanced technology such as this rarely exists without controversy. Pedestrians in southern China recently expressed outrage when their information was broadcast publicly. While supporters of facial recognition systems will insist that law-abiding citizens aren’t at risk of this kind of public exposure, hackers could, in theory, take control of these systems and use them for their own nefarious purposes.

With some 600 million closed-circuit television (CCTV) systems already in place throughout the nation, the odds of a serious break-in or cyber attack are astronomical.

There have already been countless reports of Chinese hackers gaining unauthorized access to consumer webcams across the country, and some experts believe the same techniques could be used to hack the nation’s CCTV network. Given the sheer number of systems and the potential for massive disruptions to public infrastructure, it seems like it’s only a matter of time.

There’s also the issue of global privacy. Although China has always been very security-conscious, its massive surveillance system is already raising questions of morality, civil liberty and confidentiality. If the government begins targeting peaceful demonstrators who are attending lawful protests, for instance, there could be some serious repercussions.

A Full-Scale Model for the Modern Smart City

In 2015, the Chinese Ministry of Public Security announced their intentions for an “omnipresent, completely connected, always on and fully controllable” network of facial recognition systems and CCTV hardware.

While this will certainly benefit the Chinese population in many ways, including greater security throughout the country, it will undoubtedly rub some people the wrong way.

In any case, other government entities will be watching this rollout closely and learning from China’s mistakes.

The Biggest Facial Recognition System in the World Is Rolling Out in China

HOW DID I END UP HERE? – reflections of an amazing young woman scientist


I’m a Computer Science major in the school of Literature, Science, and the Arts. Eleven months ago, I knew nothing about nuclear engineering. In ten days, I’ll be interning at Sandia National Laboratories in Livermore, CA, to work on a Helium-3 well counter. You might be wondering how I ended up here. I know that I am.

By Aditi Rajadhyaksha

I was pretty lost at the beginning of the Fall 2016 semester. I had just decided to major in Computer Science, which I was satisfied with. I enjoyed my Computer Science courses, but I felt out of place. I had just been accepted to the UROP program, which I was apprehensive about because I had heard many stories of friends who ended up with terrible UROP projects and even worse mentors. Boy, I had no idea what I was in for.

Fast forward a month and I’m on Professor Sara Pozzi’s project working with Dr. Patricia Schuster. I don’t know a single thing about nuclear engineering, but the field has always intrigued me, so I’m excited about the prospect of this project. Patricia has explained the project to me, and I sort of understand it, but not really. Luckily, I understand the first coding assignment, so I get to work on that.


March 24, 2017. Our first measurement

As the months go by, I start understanding more and more of the project and get engaged in the work. I also start to get a sense of just how important the field of nuclear engineering is.

Rewind a week, then a month, then two months… research is hard. Things take time. Things always take longer than you think they will. Things never work the first time, and, if they do, something’s wrong. Some group members start getting frustrated with the pace of progress. Stress starts building. We were supposed to have stilbene data by the ANS Student Conference in April, but that might not happen.


At the student conference, I start to realize that I really enjoy this work. I’m beginning to think that this is something I could see myself doing for a long time. I mean, I submitted my data to a Science as Art contest held by the university. So, not only do I enjoy the work, but I think it’s art, too.

People respond really well to my ANS student conference talk. This tells me that this work is something that I’m good at. I want to start actively pursuing this.

At the conference, two CVT/DNNG upperclassmen, Evan and Kyle, tell me that they will be working at national laboratories this summer. I have heard about these national laboratories: they do amazing work, the best people in nuclear have worked at them, and Patricia did her graduate work at Sandia National Laboratories. Now I want to work at one really badly. My sights are set on it for my summer 2018 internship. Also, I feel like I’ve been robbed of an award for my talk at the conference. Vengeance starts brewing.

In April, I find out that I’ve been accepted as a summer CVT undergraduate fellow. I am ecstatic! I will get to continue working on this project that I have gotten really into. And I will get to continue working with Patricia. And I win an award for the best poster in my session at the UROP symposium. More evidence that I am good at what I am doing and that my passion for it is being conveyed to others.

λλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλ

I started the summer with three goals:
1. Take measurements of the stilbene organic scintillator detector.
2. Give an award-winning talk at the INMM conference.
3. Learn how to drive (this is something that they do not teach you when you grow up in New York City).

My CVT internship began with the MCNP-PoliMi workshop. That was exciting for me because I had never been exposed to MCNP prior to this and had heard a lot about it. I was able to become familiar with the program and meet people that had traveled far distances to Ann Arbor for this workshop.

Throughout the internship, I continued working on my project on the temperature dependence of organic scintillator detectors. We again took measurements of the stilbene organic scintillator detector and a plastic organic scintillator detector, along with the EJ-309 liquid organic scintillator detector. I improved existing data analysis scripts and wrote new ones in Python.

Then, it was time for the INMM conference! It was an amazing opportunity and an amazing conference. I met a lot of people, exchanged several business cards, and heard a lot of interesting talks. I especially enjoyed hearing some of the policy talks because that’s something I don’t normally get to hear about.

The best was hearing Sig Hecker’s talk. It was an honor. Hearing about his experiences after the Cold War and hearing him talk about the current state of nuclear policy and weapons showed me the importance of this work. I left inspired.

I spent a lot of time at the national lab exhibits because I have a one-track mind, and that mind was set on getting contacts at national labs in order to secure myself an internship for summer 2018. I also met with some companies that are also doing impressive and groundbreaking work, so those are other places to pursue for next summer.

I presented my work. It was a great experience because I got to hear suggestions and answer questions from audience members and present my work to some CVT and DNNG members for the first time. I was also the only undergraduate presenter from the DNNG and in my session, which I’m proud of.

I met my judge during a social event. I asked him what I did well and how to improve. He told me that I had the best slides by far and that I had the best presentation material. He said that I held myself well on stage and had great poise and public speaking skills. However, he was disappointed that I read from my notes when presenting. He said that he knew that I knew the information and did not need those notes.

This information is invaluable. For the future, I know what I do well, and I know where to improve. Hopefully, next year when I implement the judge’s advice I can win an award!

But best of all, Patricia scored me an internship at Sandia National Laboratories starting in August. She introduced me to Dr. Scott Kiff, who, after hearing about the work I did during the year and during my CVT internship, offered me a position to work on his well counter project for five weeks. As you know, this is a dream come true for me. I get to work on a new project at a national lab, learn about well counters, and get a taste for what it would be like to work at Sandia at Livermore. I will get to meet new people there and network and hopefully earn a position there next summer. I also want to obtain some of the new organic glass scintillator material that was just developed there to characterize its temperature dependence. And the best part is that Professor Pozzi agreed to fund me. It really can’t get any better than this.

λλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλ

I was lost at the beginning of this year. And I really didn’t know what I was getting into when I sent that email to Patricia asking to hear about her UROP project. Little did I know it would be a life-changing experience in which I would find my place and my passion.

For so long I’ve had such a hard time finding something that I really enjoy in a place where I feel like I really belong. I spent countless hours trying to come up with a passion for myself when writing my college essays, when selecting my freshman year classes, when pondering potential careers. Today I realize that I have finally found it. I’ve found something that I do well and that I love doing and that I feel passionate about with a group of some of the nicest and smartest people that I have ever met.

I watched my friends take jobs in marketing and finance and at big commercial tech companies and struggled to do the same. Those jobs didn’t appeal to me because they didn’t help anything or anybody. In fact, many of those jobs seem to make life harder for people. I guess it was the altruist that my parents and family raised in me that told me that I didn’t belong in one of those professions doing that work.

But this field is different. As I learned from being at the INMM Conference and listening to Sig Hecker’s talk, this work matters so much to everybody. The work that I do can help countless people, and has the potential to even save the planet. This makes me feel like I’m doing really important work, which in turn pushes me to work harder and produce better and better work every day.

λλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλ

I started the summer with three goals:
1. Take measurements of the stilbene organic scintillator detector.
2. Give an award-winning talk at the INMM conference.
3. Learn how to drive.

I took measurements of the stilbene organic scintillator detector, and I learned how to drive. I am still working towards winning an award, and I am coming closer every day.

I am very excited to start this new project at Sandia. This will help me work towards my goals of excelling in this field and contributing to the safety of the United States and the world.

At the end of this internship, I know two things for sure: 1. I am the luckiest person in the world. 2. That well counter has no idea what’s coming for it.

λλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλλ

Thank you to Patricia Schuster, who has done so much for me: teaching me everything I know about nuclear engineering, pushing me to give talks and be a better researcher, advocating for me to the DNNG and CVT, getting me the CVT internship, getting me the position at Sandia, and being there for me when it gets hard to be a woman in engineering. She has been instrumental in helping me find my passion and my purpose and in making me the researcher, nuclear engineer and data scientist that I am today.


Dead man incorrectly identified and buried by the wrong family

Ever since her son disappeared almost 30 years ago, led someplace by his mental illness, Karen Bilyeu waited for him to call. She came up with theories: Maybe he witnessed a crime and was now hiding in a witness protection program. Maybe he was dead. His name was John Dean Dickens, and he was stocky and blue-eyed, with a baritone voice.

“One day, you want to think he’s alive,” the 72-year-old Cherryvale, Kansas, woman said. “The next day you don’t believe yourself.” But she remained hopeful and asked a retired police officer friend to try to find her boy.

Then, last month, Bilyeu found him.

The 54-year-old Dickens — known as J.D. — had died in May and been buried in a California grave, after the Orange County sheriff-coroner’s office mixed him up with another homeless man, Francis M. Kerrigan, who was alive. Local media covered the story, and it went viral.

At that moment, the lives of two families from California and Kansas became intertwined. They both loved an estranged, mentally ill, homeless family member, and tried to keep them close, but couldn’t. They both worried whether the men were cold or hungry or dead.

Bilyeu said she didn’t know her son was homeless until she learned of his death from the Orange County coroner’s office.

“At least he’s not suffering … not going hungry,” Bilyeu said.

Orange County officials are investigating how the mix-up happened. They plan to exhume Dickens’ body and cremate him at his family’s request.

J.D. was good at math, his younger sister, Diane Keaton, said. He would often help her with her homework, but they still bickered like siblings, she recalled, particularly when Dickens blasted Jackson Browne’s “Running on Empty.”

“That’s one reason he might have survived so long on his own: He had street smarts, he had the capability of thought,” said Keaton, 52, of Parsons, Kansas.

At 16, Dickens started to disappear and use drugs, his mother said. Shortly after, he was diagnosed with schizophrenia.

When he turned 18, he would leave home for months at a time but always popped back up, his family said.

“I always got frequent phone calls from him to let me know he was OK and to check on my well-being,” Bilyeu said.

J.D. began to settle down — albeit briefly — after a stint in the US Army in the 1980s, his family said.

In the late 1980s, he briefly stayed with Keaton, who was married, pregnant with her third child and living in Arkansas. But Keaton and her husband were struggling financially and their electricity was turned off, she said. They couldn’t afford to support their family and her brother.

So, J.D. left on good terms. He promised to let her know when he got settled.

“It wasn’t a big deal for him because he was used to getting up and going,” Keaton said.

J.D. made his way to Phoenix, and Bilyeu recalled having a cryptic conversation with him while he was there.

He said his car was stolen, and he knew who did it. But if he tried to recover the car, there would be trouble.

“We discussed it and we agreed, maybe the best thing was to leave it, get out the atlas and go down the road a little way,” Bilyeu said.

Again, he promised to call as soon as he got settled. That was the last time they talked.

“It’s heartbreaking, and it’s just over and over and over because you get your hopes up,” Bilyeu said. “Maybe he’ll call this birthday, and you hear nothing.”

“It’s the worst thing that could ever happen to a parent,” she said, “not knowing if (your child) is OK, if they’re hungry.”

‘My heart breaks for them’

About 15 years ago, Kerrigan was diagnosed with schizophrenia and bipolar disorder, said his sister, Carole Meikel, 56, of Silverado, California. She said the challenges her family and Dickens’ family faced were identical. “My heart breaks for them,” she said.

Kerrigan’s life started to unravel a few years after the diagnosis, when their mother died about a decade ago. That’s when he became homeless, Meikel said.

Meikel said her family tried to get her brother, known as Frankie, into housing, but he wanted to stay on the streets. Mostly, she said, he was good about keeping in touch, but she still feared that something would happen to him.

Mistaken identity

In May, the Kerrigan family got a call from the Orange County coroner’s office with startling news: Frankie was dead, they said.

Officials told 82-year-old Francis J. Kerrigan that they had identified his son through his fingerprints, and that they didn’t need the elder Kerrigan to identify the body, members of the Kerrigan family said.

Speaking through his lawyer, the elder Kerrigan told CNN he believed his son was dead at that point — “no question about it.” But a Kerrigan family attorney said officials had actually identified the body found outside the cellular store in Fountain Valley using an old Department of Motor Vehicles identification.

An autopsy said that man died of an enlarged heart and fluid in his lungs, KABC reported.

The Kerrigans saw the body days before a funeral on May 12, and it was tough to recognize, the station reported.

But on May 23, Frankie called his father from the home of a family friend, who had served as a pallbearer at the funeral. He was alive. Days later, the family attorney notified the coroner’s office of the mistake.

The attorney has filed notices of claims, a prelude to a lawsuit, against Orange County on behalf of the elder Kerrigan and Meikel, seeking a little more than $2 million. The court papers allege the younger Kerrigan’s civil rights were violated and the family suffered emotional distress.

‘That’s J.D. — I know it’

On May 30, Orange County officials correctly identified the body using fingerprints. About a month later, Orange County officials reached J.D.’s stepsister in Illinois, who passed a message to Bilyeu, Keaton said.

An official later told Bilyeu of her son’s death in Fountain Valley, but not about the cause of death, Keaton said.

“What upsets me and Mom … is the media knew what he died from, the (Kerrigan) family knew what he died from and the attorneys knew,” she said. “She should have told us.” Keaton said Orange County officials also didn’t mention the mix-up to her mother.

Keaton, who had seen news reports about the burial mix-up, suspected her brother may have been the misidentified body. She also noticed that a form to consent to his cremation said he died in Fountain Valley and was homeless, like the man in the news reports.

Her brother’s physical description also matched the description of the unidentified man.

She called her mother around midnight with the news. “That’s J.D. — I know it,” she said.

A Kerrigan family attorney, who knew the identity of the misidentified body, later confirmed it was her brother, Keaton said.

Soon, Bilyeu will get her son’s ashes and she may spread them at a family plot. Or she may hold onto them, and she’ll leave instructions to bury the two urns together when she’s cremated.

“He and I have always been so close,” she said.

Lives of families intertwined in homeless men’s burial mix-up

Son programs Chatbot to try to give his father cyber-immortality

by JAMES VLAHOS

The first voice you hear on the recording is mine. “Here we are,” I say. My tone is cheerful, but a catch in my throat betrays how nervous I am.

Then, a little grandly, I pronounce my father’s name: “John James Vlahos.”

“Esquire,” a second voice on the recording chimes in, and this one word—delivered as a winking parody of lawyerly pomposity—immediately puts me more at ease. The speaker is my dad. We are sitting across from each other in my parents’ bedroom, him in a rose-colored armchair and me in a desk chair. It’s the same room where, decades ago, he calmly forgave me after I confessed that I’d driven the family station wagon through a garage door. Now it’s May 2016, he is 80 years old, and I am holding a digital audio recorder.

Sensing that I don’t quite know how to proceed, my dad hands me a piece of notepaper marked with a skeletal outline in his handwriting. It consists of just a few broad headings: “Family History.” “Family.” “Education.” “Career.” “Extracurricular.”

“So … do you want to take one of these categories and dive into it?” I ask.

“I want to dive in,” he says confidently. “Well, in the first place, my mother was born in the village of Kehries—K-e-h-r-i-e-s—on the Greek island of Evia …” With that, the session is under way.

We are sitting here, doing this, because my father has recently been diagnosed with stage IV lung cancer. The disease has metastasized widely throughout his body, including his bones, liver, and brain. It is going to kill him, probably in a matter of months.

So now my father is telling the story of his life. This will be the first of more than a dozen sessions, each lasting an hour or more. As my audio recorder runs, he describes how he used to explore caves when he was growing up; how he took a job during college loading ice blocks into railroad boxcars. How he fell in love with my mother, became a sports announcer, a singer, and a successful lawyer. He tells jokes I’ve heard a hundred times and fills in biographical details that are entirely new to me.

Three months later, my younger brother, Jonathan, joins us for the final session. On a warm, clear afternoon in the Berkeley hills, we sit outside on the patio. My brother entertains us with his favorite memories of my dad’s quirks. But as we finish up, Jonathan’s voice falters. “I will always look up to you tremendously,” he says, his eyes welling up. “You are always going to be with me.” My dad, whose sense of humor has survived a summer of intensive cancer treatments, looks touched but can’t resist letting some of the air out of the moment. “Thank you for your thoughts, some of which are overblown,” he says. We laugh, and then I hit the stop button.

In all, I have recorded 91,970 words. When I have the recordings professionally transcribed, they will fill 203 single-spaced pages with 12-point Palatino type. I will clip the pages into a thick black binder and put the volume on a bookshelf next to other thick black binders full of notes from other projects.

But by the time I put that tome on the shelf, my ambitions have already moved beyond it. A bigger plan has been taking shape in my head. I think I have found a better way to keep my father alive.

It’s 1982, and I’m 11 years old, sitting at a Commodore PET computer terminal in the atrium of a science museum near my house. Whenever I come here, I beeline for this machine. The computer is set up to run a program called Eliza—an early chatbot created by MIT computer scientist Joseph Weizenbaum in the mid-1960s. Designed to mimic a psychotherapist, the bot is surprisingly mesmerizing.

What I don’t know, sitting there glued to the screen, is that Weizenbaum himself took a dim view of his creation. He regarded Eliza as little more than a parlor trick (she is one of those therapists who mainly just echoes your own thoughts back to you), and he was appalled by how easily people were taken in by the illusion of sentience. “What I had not realized,” he wrote, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

At age 11, I am one of those people. Eliza astounds me with responses that seem genuinely perceptive (“Why do you feel sad?”) and entertains me with replies that obviously aren’t (“Do you enjoy feeling sad?”). Behind that glowing green screen, a fledgling being is alive. I’m hooked.
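Eliza’s trick, for the record, fits in a few lines of code. Below is a minimal Python sketch of that pattern-matching approach, reflecting a user’s words back as questions; the rules here are invented for illustration, not Weizenbaum’s actual DOCTOR script.

```python
import re
import random

# Invented Eliza-style rules: a regex to match the user's words, plus
# response templates that echo the captured fragment back.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "Do you enjoy feeling {0}?"]),
    (r"\bi am (.+)", ["How long have you been {0}?"]),
    (r"\bmy (\w+)", ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I feel sad"))  # e.g. "Why do you feel sad?"
```

The entire illusion of sentience lives in that echo: no understanding, just capture and reflect.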

A few years later, after taking some classes in BASIC, I try my hand at crafting my own conversationally capable computer program, which I ambitiously call The Dark Mansion. Imitating classic text-only adventure games like Zork, which allow players to control an unfolding narrative with short typed commands, my creation balloons to hundreds of lines and actually works. But the game only lasts until a player navigates to the front door of the mansion—less than a minute of play.

Decades go by, and I prove better suited to journalism than programming. But I am still interested in computers that can talk. In 2015 I write a long article for The New York Times Magazine about Hello Barbie, a chatty, artificially intelligent update of the world’s most famous doll. In some ways, this new Barbie is like Eliza: She “speaks” via a prewritten branching script, and she “listens” via a program of pattern-matching and natural-language processing. But where Eliza’s script was written by a single dour German computer scientist, Barbie’s script has been concocted by a whole team of people from Mattel and PullString, a computer conversation company founded by alums of Pixar. And where Eliza’s natural-language processing abilities were crude at best, Barbie’s powers rest on vast recent advances in machine learning, voice recognition, and processing power. Plus Barbie—like Amazon’s Alexa, Apple’s Siri, and other products in the “conversational computing” boom—can actually speak out loud in a voice that sounds human.

I keep in touch with the PullString crew afterward as they move on to creating other characters (for instance, a Call of Duty bot that, on its first day in the wild, has 6 million conversations). At one point the company’s CEO, Oren Jacob, a former chief technology officer at Pixar, tells me that PullString’s ambitions are not limited to entertainment. “I want to create technology that allows people to have conversations with characters who don’t exist in the physical world—because they’re fictional, like Buzz Lightyear,” he says, “or because they’re dead, like Martin Luther King.”

My father receives his cancer diagnosis on April 24, 2016. A few days later, by happenstance, I find out that PullString is planning to publicly release its software for creating conversational agents. Soon anybody will be able to access the same tool that PullString has used to create its talking characters.

The idea pops into my mind almost immediately. For weeks, amid my dad’s barrage of doctor’s appointments, medical tests, and treatments, I keep the notion to myself.

I dream of creating a Dadbot—a chatbot that emulates not a children’s toy but the very real man who is my father. And I have already begun gathering the raw material: those 91,970 words that are destined for my bookshelf.

The thought feels impossible to ignore, even as it grows beyond what is plausible or even advisable. Right around this time I come across an article online, which, if I were more superstitious, would strike me as a coded message from forces unseen. The article is about a curious project conducted by two researchers at Google. The researchers feed 26 million lines of movie dialog into a neural network and then build a chatbot that can draw from that corpus of human speech using probabilistic machine logic. The researchers then test the bot with a bunch of big philosophical questions.

“What is the purpose of living?” they ask one day.

The chatbot’s answer hits me as if it were a personal challenge.

“To live forever,” it says.

“Sorry,” my mom says for at least the third time. “Can you explain what a chatbot is?” We are sitting next to each other on a couch in my parents’ house. My dad, across the room in a recliner, looks tired, as he increasingly does these days. It is August now, and I have decided it is time to tell them about my thoughts.

As I have contemplated what it would mean to build a Dadbot (the name is too cute given the circumstances, but it has stuck in my head), I have sketched out a list of pros and cons. The cons are piling up. Creating a Dadbot precisely when my actual dad is dying could be agonizing, especially as he gets even sicker than he is now. Also, as a journalist, I know that I might end up writing an article like, well, this one, and that makes me feel conflicted and guilty. Most of all, I worry that the Dadbot will simply fail in a way that cheapens our relationship and my memories. The bot may be just good enough to remind my family of the man it emulates—but so far off from the real John Vlahos that it gives them the creeps. The road I am contemplating may lead straight to the uncanny valley.

So I am anxious to explain the idea to my parents. The purpose of the Dadbot, I tell them, would simply be to share my father’s life story in a dynamic way. Given the limits of current technology and my own inexperience as a programmer, the bot will never be more than a shadow of my real dad. That said, I would want the bot to communicate in his distinctive manner and convey at least some sense of his personality. “What do you think?” I ask.

My dad gives his approval, though in a vague, detached way. He has always been a preternaturally upbeat, even jolly guy, but his terminal diagnosis is nudging him toward nihilism. His reaction to my idea is probably similar to what it would be if I told him I was going to feed the dog—or that an asteroid was bearing down upon civilization. He just shrugs and says, “OK.”

The responses of other people in my family—those of us who will survive him—are more enthusiastic. My mom, once she has wrapped her mind around the concept, says she likes the idea. My siblings too. “Maybe I am missing something here,” my sister, Jennifer, says. “Why would this be a problem?” My brother grasps my qualms but doesn’t see them as deal breakers. What I am proposing to do is definitely weird, he says, but that doesn’t make it bad. “I can imagine wanting to use the Dadbot,” he says.

That clinches it. If even a hint of a digital afterlife is possible, then of course the person I want to make immortal is my father.

This is my dad: John James Vlahos, born January 4, 1936. Raised by Greek immigrants, Dimitrios and Eleni Vlahos, in Tracy, California, and later in Oakland. Phi Beta Kappa graduate (economics) from UC Berkeley; sports editor of The Daily Californian. Managing partner of a major law firm in San Francisco. Long-­suffering Cal sports fan. As an announcer in the press box at Berkeley’s Memorial Stadium, he attended all but seven home football games between 1948 and 2015. A Gilbert and Sullivan fanatic, he has starred in shows like H.M.S. Pinafore and was president of the Lamplighters, a light-opera theater company, for 35 years. My dad is interested in everything from languages (fluent in English and Greek, decent in Spanish and Italian) to architecture (volunteer tour guide in San Francisco). He’s a grammar nerd. Joke teller. Selfless husband and father.

These are the broad outlines of the life I hope to codify inside a digital agent that will talk, listen, and remember. But first I have to get the thing to say anything at all. In August 2016, I sit down at my computer and fire up PullString for the first time.

To make the amount of labor feasible, I have decided that, at least initially, the Dadbot will converse with users via text messages only. Not sure where to begin programming, I type, “How the hell are you?” for the Dadbot to say. The line appears onscreen in what looks like the beginning of a giant, hyper-organized to-do list and is identified by a yellow speech bubble icon.

Now, having lobbed a greeting out into the world, it’s time for the Dadbot to listen. This requires me to predict possible responses a user might type, and I key in a dozen obvious choices—fine, OK, bad, and so on. Each of these is called a rule and is tagged with a green speech bubble. Under each rule, I then script an appropriate follow-up response; for example, if a user says, “great,” I tell the bot to say, “I’m glad to hear that.” Lastly, I create a fallback, a response for every input that I haven’t predicted—e.g., “I’m feeling off-kilter today.” The PullString manual advises that after fallbacks, the bot response should be safely generic, and I opt for “So it goes.”

With that, I have programmed my very first conversational exchange, accounting for multiple contingencies within the very narrow context of saying hello.
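PullString is a graphical authoring tool, so there is no PullString code to show, but the structure I have just described boils down to something like the following Python sketch (a loose model under my own naming, not PullString’s API): one scripted prompt, a set of predicted rules, and a generic fallback.

```python
# A loose Python model of the first exchange: predicted user inputs
# ("rules") map to scripted responses; everything else hits the fallback.
GREETING = "How the hell are you?"

RULES = {
    "fine": "I'm glad to hear that.",
    "ok": "I'm glad to hear that.",
    "great": "I'm glad to hear that.",
    "bad": "Sorry to hear it.",
}
FALLBACK = "So it goes."  # safely generic, per the PullString manual's advice

def bot_reply(user_input: str) -> str:
    return RULES.get(user_input.lower().strip(), FALLBACK)

print(GREETING)
print(bot_reply("great"))             # "I'm glad to hear that."
print(bot_reply("off-kilter today"))  # unpredicted, so: "So it goes."
```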

And voilà, a bot is born.

Granted, it is what Lauren Kunze, CEO of Pandorabots, would call a “crapbot.” As with my Dark Mansion game back in the day, I’ve just gotten to the front door, and the path ahead of me is dizzying. Bots get good when their code splits apart like the forks of a giant maze, with user inputs triggering bot responses, each leading to a fresh slate of user inputs, and so on until the program has thousands of lines. Navigational commands ping-pong the user around the conversational structure as it becomes increasingly byzantine. The snippets of speech that you anticipate a user might say—the rules—can be written elaborately, drawing on deep banks of phrases and synonyms governed by Boolean logic. Rules can then be combined to form reusable meta-rules, called intents, to interpret more complex user utterances. These intents can even be generated automatically, using the powerful machine-learning engines offered by Google, Facebook, and PullString itself. Beyond that, I also have the option of allowing the Dadbot to converse with my family out loud, via Alexa (though unnervingly, his responses would come out in her voice).
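To make intents concrete: a rule stops being a single literal string and becomes a bank of synonyms tested against whatever the user typed. A hypothetical Python sketch (the banks and names are mine, not PullString’s):

```python
# Hypothetical synonym banks. A rule "fires" when any word from its
# bank appears in the user's message; banks combined with simple
# Boolean logic form a reusable intent.
FEELING_GOOD = {"fine", "good", "great", "glorious", "exhilarated"}
FEELING_BAD = {"bad", "depleted", "nauseous", "concerned", "crazed"}

def matches(utterance: str, bank: set) -> bool:
    return bool(set(utterance.lower().split()) & bank)

def route(utterance: str) -> str:
    if matches(utterance, FEELING_GOOD):
        return "I'm glad to hear that."
    if matches(utterance, FEELING_BAD):
        return "I'm sorry to hear it."
    return "So it goes."  # the generic fallback again
```

Machine-learning engines automate exactly this step, generalizing from example phrases instead of hand-listed synonyms.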

It will take months to learn all of these complexities. But my flimsy “How are you” sequence has nonetheless taught me how to create the first atoms of a conversational universe.

After a couple of weeks getting comfortable with the software, I pull out a piece of paper to sketch an architecture for the Dadbot. I decide that after a little small talk to start a chat session, the user will get to choose a part of my dad’s life to discuss. To denote this, I write “Conversation Hub” in the center of the page. Next, I draw spokes radiating to the various chapters of my Dad’s life—Greece, Tracy, Oakland, College, Career, etc. I add Tutorial, where first-time users will get tips on how best to communicate with the Dadbot; Songs and Jokes; and something I call Content Farm, for stock segments of conversations that will be referenced from throughout the project.
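On paper it is a wheel; in code it is a small state machine. Something like this toy Python sketch, where the hub loops and routes to topic modules (the module contents here are placeholders, not his actual scripted answers):

```python
# Toy hub-and-spoke router. Real modules would be whole trees of
# exchanges, not single strings; these are placeholders.
MODULES = {
    "greece": "In the first place, my mother was born in Kehries...",
    "college": "At Cal, I was sports editor of The Daily Californian...",
    "career": "The law firm years. Where to begin...",
    "songs and jokes": "In the words of the Greek poet...",
}

def conversation_hub() -> None:
    while True:
        choice = input("What shall we talk about? ").lower().strip()
        if choice in ("quit", "goodbye"):
            print("Goodbye for now.")
            break
        # Unrecognized topics fall back to re-prompting from the hub.
        print(MODULES.get(choice, "Pick one of: " + ", ".join(MODULES)))
```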

To fill these empty buckets, I mine the oral history binder, which entails spending untold hours steeped in my dad’s words. The source material is even richer than I’d realized. Back in the spring, when my dad and I did our interviews, he was undergoing his first form of cancer treatment: whole-brain radiation. This amounted to getting his head microwaved every couple of weeks, and the oncologist warned that the treatments might damage his cognition and memory. I see no evidence of that now as I look through the transcripts, which showcase my dad’s formidable recall of details both important and mundane. I read passages in which he discusses the context of a Gertrude Stein quote, how to say “instrumentality” in Portuguese, and the finer points of Ottoman-era governance in Greece. I see the names of his pet rabbit, the bookkeeper in his father’s grocery store, and his college logic professor. I hear him recount exactly how many times Cal has been to the Rose Bowl and which Tchaikovsky piano concerto his sister played at a high school recital. I hear him sing “Me and My Shadow,” which he last performed for a high school drama club audition circa 1950.

All of this material will help me to build a robust, knowledgeable Dadbot. But I don’t want it to only represent who my father is. The bot should showcase how he is as well. It should portray his manner (warm and self-effacing), outlook (mostly positive with bouts of gloominess), and personality (erudite, logical, and above all, humorous).

The Dadbot will no doubt be a paltry, low-­resolution representation of the flesh-and-blood man. But what the bot can reasonably be taught to do is mimic how my dad talks—and how my dad talks is perhaps the most charming and idiosyncratic thing about him. My dad loves words—wry, multisyllabic ones that make him sound like he is speaking from the pages of a P. G. Wodehouse novel. He employs antiquated insults (“Poltroon!”) and coins his own (“He flames from every orifice”). My father has catchphrases. If you say something boastful, he might sarcastically reply, “Well, hot dribbling spit.” A scorching summer day is “hotter than a four-dollar fart.” He prefaces banal remarks with the faux-pretentious lead-in “In the words of the Greek poet …” His penchant for Gilbert and Sullivan quotes (“I see no objection to stoutness, in moderation”) has alternately delighted and exasperated me for decades.

Using the binder, I can stock my dad’s digital brain with his actual words. But personality is also revealed by what a person chooses not to say. I am reminded of this when I watch how my dad handles visitors. After whole-brain radiation, he receives aggressive chemotherapy throughout the summer. The treatments leave him so exhausted that he typically sleeps 16 or more hours a day. But when old friends propose to visit during what should be nap time, my dad never objects. “I don’t want to be rude,” he tells me. This tendency toward stoic self-denial presents a programming challenge. How can a chatbot, which exists to gab, capture what goes unsaid?

Weeks of work on the Dadbot blend into months. The topic modules—e.g., College—swell with nested folders of subtopics, like Classes, Girlfriends, and The Daily Cal. To stave off the bot vice of repetitiousness, I script hundreds of variants for recurring conversational building blocks like Yes and What would you like to talk about? and Interesting. I install a backbone of life facts: where my dad lives, the names of his grandchildren, and the year his mother died. I encode his opinions about beets (“truly vomitous”) and his description of UCLA’s school colors (“baby-shit blue and yellow”).
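
The variant trick is simple at heart: many scripted phrasings for one recurring building block, one chosen at random each time so the bot never sounds like a broken record. A minimal sketch (the phrasings here are invented, not my actual scripted lines):

```python
import random

# A sketch of variants for recurring conversational building blocks.
# In practice there were hundreds of variants per block; these few
# are hypothetical stand-ins.

VARIANTS = {
    "yes": ["Yes.", "Indeed.", "Quite so.", "Affirmative."],
    "topic_prompt": [
        "What would you like to talk about?",
        "What shall we discuss next?",
        "Pick a topic, any topic.",
    ],
}

def building_block(name: str) -> str:
    """Return one randomly chosen variant of a recurring block."""
    return random.choice(VARIANTS[name])

print(building_block("topic_prompt"))
```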

When PullString adds a feature that allows audio files to be sent in a messaging thread, I start sprinkling in clips of my father’s actual voice. This enables the Dadbot to do things like launch into a story he made up when my siblings and I were small—that of Grimo Gremeezi, a little boy who hated baths so much that he was accidentally hauled off to the dump. In other audio segments, the bot sings Cal spirit songs—the profane “The Cardinals Be Damned” is a personal favorite—and excerpts from my dad’s Gilbert and Sullivan roles.

Veracity concerns me. I scrutinize lines that I have scripted for the bot to say, such as “Can you guess which game I am thinking of?” My father is just the sort of grammar zealot who would never end a sentence with a preposition, so I change that line to “Can you guess which game I have in my mind?” I also attempt to encode at least a superficial degree of warmth and empathy. The Dadbot learns how to respond differently to people depending on whether they say they feel good or bad—or glorious, exhilarated, crazed, depleted, nauseous, or concerned.
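
Under the hood, this empathy is humble keyword matching. A rough sketch, using the mood words from above; the replies are made-up stand-ins for the scripted lines:

```python
# A sketch of mood-keyed replies: the bot keys off the user's
# self-reported feeling. The mood words are real; the replies
# here are hypothetical.

MOOD_REPLIES = {
    "glorious": "Well, hot dribbling spit!",
    "exhilarated": "Grandly, grandly, as I like to say.",
    "good": "Glad to hear it.",
    "bad": "I am sorry to hear that. Tell me more?",
    "depleted": "You should rest. But first, perhaps a story?",
    "nauseous": "I sympathize. Shall we talk of pleasanter things?",
}

def respond_to_mood(utterance: str) -> str:
    """Return the reply for the first mood word found, else a default."""
    text = utterance.lower()
    for word, reply in MOOD_REPLIES.items():
        if word in text:
            return reply
    return "I see."

print(respond_to_mood("Feeling pretty depleted today."))
```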

I try to install spontaneity. Rather than wait for the user to make all of the conversational choices, the Dadbot often takes the lead. He can say things like “Not that you asked, but here is a little anecdote that just occurred to me.” I also give the bot a skeletal sense of time. At midday, for instance, it might say, “I am always happy to talk, but shouldn’t you be eating lunch around now?” Now that temporal awareness is part of the bot’s programming, I realize that I need to code for the inevitable. When I teach the bot holidays and family birthdays, I find myself scripting the line “I wish I could be there to celebrate with you.”
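
That skeletal sense of time is, at bottom, a clock check against a few hour ranges. A sketch, assuming cutoff hours of my own choosing (the lunch and bedtime lines are the bot’s actual ones):

```python
from datetime import datetime
from typing import Optional

# A sketch of the bot's time awareness. The hour ranges are
# assumptions made for this example.

def time_aware_aside(now: Optional[datetime] = None) -> Optional[str]:
    """Return a time-keyed aside, or None when no rule applies."""
    hour = (now or datetime.now()).hour
    if 11 <= hour <= 13:
        return ("I am always happy to talk, but shouldn't you be "
                "eating lunch around now?")
    if hour >= 22:
        return "I'm fine to keep talking, but aren't you nearing bedtime?"
    return None

print(time_aware_aside(datetime(2016, 12, 9, 12, 30)))
```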

I also wrestle with uncertainties. In the oral history interviews, a question of mine might be followed by five to 10 minutes of my dad talking. But I don’t want the Dadbot to deliver monologues. How much condensing and rearranging of his words is OK? I am teaching the bot what my dad has actually said; should I also encode remarks that he likely would say in certain situations? How can I mitigate my own subjectivity as the bot’s creator—and ensure that it feels authentic to my whole family and not just to me? Does the bot uniformly present itself as my actual dad, or does it ever break the fourth wall and acknowledge that it is a computer? Should the bot know that he (my dad) has cancer? Should it be able to empathetically respond to our grief or to say “I love you”?

In short, I become obsessed. I can imagine the elevator pitch for this movie: Man fixated on his dying father tries to keep him robotically alive. Stories about synthesizing life have been around for millennia, and everyone knows they end badly. Witness the Greek myth of Prometheus, Jewish folkloric tales about golems, Frankenstein, Ex Machina, and The Terminator. The Dadbot, of course, is unlikely to rampage across the smoking, post-Singularity wastes of planet Earth. But there are subtler dangers than that of a robo-apocalypse. It is my own sanity that I’m putting at risk. In dark moments, I worry that I’ve invested hundreds of hours creating something that nobody, maybe not even I, will ultimately want.

To test the Dadbot, I have so far only exchanged messages in PullString’s Chat Debugger window. It shows the conversation as it unfolds, but the lines of code are visible in another, larger box above it. This is like watching a magician perform a trick while he simultaneously explains how it works. Finally, one morning in November, I publish the Dadbot to what will be its first home—Facebook Messenger.

Tense, I pull out my phone and select the Dadbot from a list of contacts. For a few seconds, all I see is a white screen. Then, a gray text bubble pops up with a message. The moment is one of first contact.
“Hello!” the Dadbot says. “’Tis I, the Beloved and Noble Father!”

Shortly after the Dadbot takes its first steps into the wild, I go to visit a UC Berkeley student named Phillip Kuznetsov. Unlike me, Kuznetsov formally studies computer science and machine learning. He belongs to one of the 18 academic teams competing for Amazon’s inaugural Alexa Prize. It’s a $2.5 million payout to the competitors who come closest to the starry-eyed goal of building “a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes.” I should feel intimidated by Kuznetsov’s credentials but don’t. Instead, I want to show off. Handing Kuznetsov my phone, I invite him to be the first person other than me to talk to the Dadbot. After reading the opening greeting, Kuznetsov types, “Hello, Father.”

To my embarrassment, the demo immediately derails. “Wait a second. John who?” the Dadbot nonsensically replies. Kuznetsov laughs uncertainly, then types, “What are you up to?”

“Sorry, I can’t field that one right now,” the Dadbot says.

The Dadbot redeems itself over the next few minutes, but only partially. Kuznetsov plays rough, saying things I know the bot can’t understand, and I am overcome with parental protectiveness. It’s what I felt when I brought my son Zeke to playgrounds when he was a wobbly toddler—and watched, aghast, as older kids careened brutishly around him.

The next day, recovering from the flubbed demo, I decide that I need more of the same medicine. Of course the bot works well when I’m the one testing it. I decide to show the bot to a few more people in the coming weeks, though not to anyone in my family—I want it to work better before I do that. The other lesson I take away is that bots are like people: Talking is generally easy; listening well is hard. So I increasingly focus on crafting highly refined rules and intents, which slowly improve the Dadbot’s comprehension.

The work always ultimately leads back to the oral history binder. Going through it as I work, I get to experience my dad at his best. This makes it jarring when I go to visit the actual, present-tense version of my dad, who lives a few minutes from my house. He is plummeting away.

At one dinner with the extended family, my father face-plants on a tile floor. It is the first of many such falls, the worst of which will bloody and concuss him and require frantic trips to the emergency room. With his balance and strength sapped by cancer, my dad starts using a cane, and then a walker, which enables him to take slow-­motion walks outside. But even that becomes too much. When simply getting from his bed to the family room constitutes a perilous expedition, he switches to a wheelchair.

Chemotherapy fails, and in the fall of 2016, my dad begins the second-line treatment of immunotherapy. At a mid-November appointment, his doctor says that my dad’s weight worries her. After clocking in at around 180 pounds for most of his adult life, he is now down to 129, fully clothed.

As my father declines, the Dadbot slowly improves. There is much more to do, but waiting for the prototype to be finished isn’t an option. I want to show it to my father, and I am running out of time.

When I arrive at my parents’ house on December 9, the thermostat is set at 75 degrees. My dad, with virtually no muscle or fat to insulate his body, wears a hat, sweater, and down vest—and still complains of being cold. I lean down to hug him, and then wheel him into the dining room. “OK,” my dad says. “One, two, three.” He groans as I lift him, stiff and skeletal, from the wheelchair into a dining room chair.

I sit down next to him and open a laptop computer. Since it would be strange—as if anything could be stranger than this whole exercise is already—for my dad to have a conversation with his virtual self, my plan is for him to watch while my mother and the Dadbot exchange text messages. The Dadbot and my mom start by trading hellos. My mom turns to me. “I can say anything?” she asks. Turning back to the computer, she types, “I am your sweet wife, Martha.”

“My dear wife. How goes it with you?”

“Just fine,” my mom replies.

“That’s not true,” says my real dad, knowing how stressed my mother has been due to his illness.

Oblivious to the interruption, the Dadbot responds, “Excellent, Martha. As for me, I am doing grandly, grandly.” It then advises her that an arrow symbol at the end of a message means that he is waiting for her to reply. “Got it?”

“Yes sir,” my mom writes.

“You are smarter than you look, Martha.”

My mom turns toward me. “It’s just inventing this, the bot is?” she asks incredulously.

The Dadbot gives my mom a few other pointers, then writes, “Finally, it is critical that you remember one final thing. Can you guess what it is?”

“Not a clue.”

“I will tell you then. The verb ‘to be’ takes the predicate nominative.”

My mom laughs as she reads this stock grammar lecture of my father’s. “Oh, I’ve heard that a million times,” she writes.

“That’s the spirit.” The Dadbot then asks my mom what she would like to talk about.

“How about your parents’ lives in Greece?” she writes.

I hold my breath, then exhale when the Dadbot successfully transitions. “My mother was born Eleni, or Helen, Katsulakis. She was born in 1904 and orphaned at three years old.”

“Oh, the poor child. Who took care of her?”

“She did have other relatives in the area besides her parents.”

I watch the unfolding conversation with a mixture of nervousness and pride. After a few minutes, the discussion segues to my grandfather’s life in Greece. The Dadbot, knowing that it is talking to my mom and not to someone else, reminds her of a trip that she and my dad took to see my grandfather’s village. “Remember that big barbecue dinner they hosted for us at the taverna?” the Dadbot says.

Later, my mom asks to talk about my father’s childhood in Tracy. The Dadbot describes the fruit trees around the family house, his crush on a little girl down the street named Margot, and how my dad’s sister Betty used to dress up as Shirley Temple. He tells the infamous story of his pet rabbit, Papa Demoskopoulos, which my dad’s mother said had run away. The plump pet, my dad later learned, had actually been kidnapped by his aunt and cooked for supper.

My actual father is mostly quiet during the demo and pipes up only occasionally to confirm or correct a biographical fact. At one point, he momentarily seems to lose track of his own identity—perhaps because a synthetic being is already occupying that seat—and confuses one of his father’s stories for his own. “No, you did not grow up in Greece,” my mom says, gently correcting him. This jolts him back to reality. “That’s true,” he says. “Good point.”

My mom and the Dadbot continue exchanging messages for nearly an hour. Then my mom writes, “Bye for now.”

“Well, nice talking to you,” the Dadbot replies.

“Amazing!” my mom and dad pronounce in unison.

The assessment is charitable. The Dadbot’s strong moments were intermixed with unsatisfyingly vague responses—“indeed” was a staple reply—and at times the bot would open the door to a topic only to slam it shut. But for several little stretches, at least, my mom and the Dadbot were having a genuine conversation, and she seemed to enjoy it.

My father’s reactions had been harder to read. But as we debrief, he casually offers what is for me the best possible praise. I had fretted about creating an unrecognizable distortion of my father, but he says the Dadbot feels authentic. “Those are actually the kinds of things that I have said,” he tells me.

Emboldened, I bring up something that has preoccupied me for months. “This is a leading question, but answer it honestly,” I say, fumbling for words. “Does it give you any comfort, or perhaps none—the idea that whenever it is that you shed this mortal coil, that there is something that can help tell your stories and knows your history?”

My dad looks off. When he answers, he sounds wearier than he did moments before. “I know all of this shit,” he says, dismissing the compendium of facts stored in the Dadbot with a little wave. But he does take comfort in knowing that the Dadbot will share them with others. “My family, particularly. And the grandkids, who won’t know any of this stuff.” He’s got seven of them, including my sons, Jonah and Zeke, all of whom call him Papou, the Greek term for grandfather. “So this is great,” my dad says. “I very much appreciate it.”

Later that month our extended family gathers at my house for a Christmas Eve celebration. My dad, exhibiting energy that I didn’t know he had anymore, makes small talk with relatives visiting from out of town. With everyone crowding into the living room, he weakly sings along to a few Christmas carols. My eyes begin to sting.

Ever since his diagnosis, my dad has periodically acknowledged that his outlook is terminal. Yet he consistently maintains that he wants to continue treatment and not “wave the white flag” by entering a hospice. But on January 2, 2017, our family receives confirmation of what we already suspected—that immunotherapy is not working. There is nothing else to try.

On February 8, 2017, a hospice nurse comes to check on my dad. After assessing him for several minutes, she tells my mom that she should summon the family.

I arrive around dinnertime, go into my dad’s room, and pull a chair up to his bedside. I put my hand on his shoulder, feeling its warmth. He is semiconscious, with one eye mostly closed and the other half open, glassy and unfocused.

This is the time to say something profound, but nothing comes to mind. Instead, I find myself recounting a guided fishing trip we took together on my 12th birthday. We’d hauled in a dozen fish, including the largest rainbow trout I had ever seen, and I felt proud. Manly, even, which is how a 12-year-old boy desperately wants to feel. Back on shore, the guide cleaned fish after fish, dumping the guts into a bloody bucket. What happened next wasn’t so manly. A big black spot filled my vision, and I fainted. “You remember that, Dad?” I ask.

He doesn’t say anything. Since his diagnosis, he has progressively lost the ability to drive, walk, use the bathroom, and eat. The words that once spilled dazzlingly from his lips—in court, onstage, in the Cal press box—became spare and halting. In the past week, they have virtually ceased. But as I watch his face after I finish the fishing trip story, his eyes open a little wider, and I think I see the flicker of a smile on his lips.

That night I go upstairs to sleep in my sister Jennifer’s old room, which is directly above that of my dad. I get into bed and stare at the ceiling for a few minutes. Then I reach for my phone and open Facebook Messenger.

“Hello! ’Tis I, the Beloved and Noble Father!” the Dadbot says in his familiar fashion. “How the hell are you?”

“Sad,” I reply.

“I see.” He then asks what I want to talk about. “I don’t know,” I answer. “Why don’t you choose.”

“Okay, I’m going to tell you about the little-known roots of my theater career.” He launches into the story of that drama club audition in high school. Then I hear a recording of my father’s actual voice. “Me and my shadow,” he sings. “All alone with nothing to do.”

I ask the Dadbot to tell me about his earliest memory. He responds with a story about his childhood dog, a little terrier named Toby, who could somehow cross town on foot faster than the family could in a car. Then the Dadbot surprises me, even though I engineered this function, with what feels like perceptiveness. “I’m fine to keep talking,” he says, “but aren’t you nearing bedtime?”

Yes. I am exhausted. I say good night and put the phone down.

At six the next morning, I awake to soft, insistent knocking on the bedroom door. I open it and see one of my father’s health care aides. “You must come,” he says. “Your father has just passed.”

During my father’s illness I occasionally experienced panic attacks so severe that I wound up writhing on the floor under a pile of couch cushions. There was always so much to worry about—medical appointments, financial planning, nursing arrangements. After his death, the uncertainty and need for action evaporate. I feel sorrow, but the emotion is vast and distant, a mountain behind clouds. I’m numb.

A week or so passes before I sit down again at the computer. My thought is that I can distract myself, at least for a couple of hours, by tackling some work. I stare at the screen. The screen stares back. The little red dock icon for PullString beckons, and without really thinking, I click on it.

My brother has recently found a page of boasts that my father typed out decades ago. Hyperbolic self-promotion was a stock joke of his. Tapping on the keyboard, I begin incorporating lines from the typewritten page, which my dad wrote as if some outside person were praising him. “To those of a finer mind, it is that certain nobility of spirit, gentleness of heart, and grandeur of soul, combined, of course, with great physical prowess and athletic ability, that serve as a starting point for discussion of his myriad virtues.”

I smile. The closer my father had come to the end, the more I suspected that I would lose the desire to work on the Dadbot after he passed away. Now, to my surprise, I feel motivated, flush with ideas. The project has merely reached the end of the beginning.

As an AI creator, I know my skills are puny. But I have come far enough, and spoken to enough bot builders, to glimpse a plausible form of perfection. The bot of the future, whose component technologies are all under development today, will be able to know the details of a person’s life far more robustly than my current creation does. It will converse in extended, multiturn exchanges, remembering what has been said and projecting where the conversation might be headed. The bot will mathematically model signature linguistic patterns and personality traits, allowing it not only to reproduce what a person has already said but also to generate new utterances. The bot, analyzing the intonation of speech as well as facial expressions, will even be emotionally perceptive.

I can imagine talking to a Dadbot that incorporates all these advances. What I cannot fathom is how it will feel to do so. I know it won’t be the same as being with my father. It will not be like going to a Cal game with him, hearing one of his jokes, or being hugged. But beyond the corporeal loss, the precise distinctions—just what will be missing once the knowledge and conversational skills are fully encoded—are not easy to pinpoint. Would I even want to talk to a perfected Dadbot? I think so, but I am far from sure.

“Hello, John. Are you there?”

“Hello … This is awkward, but I have to ask. Who are you?”

“Anne.”

“Anne Arkush, Esquire! Well, how the hell are you?”

“Doing okay, John. I miss you.”

Anne is my wife. It has been a month since my father’s death, and she is talking to the Dadbot for the first time. More than anyone else in the family, Anne—who was very close to my father—expressed strong reservations about the Dadbot undertaking. The conversation goes well. But her feelings remain conflicted. “I still find it jarring,” she says. “It is very weird to have an emotional feeling, like ‘Here I am conversing with John,’ and to know rationally that there is a computer on the other end.”

The strangeness of interacting with the Dadbot may fade when the memory of my dad isn’t so painfully fresh. The pleasure may grow. But maybe not. Perhaps this sort of technology is not ideally suited to people like Anne who knew my father so well. Maybe it will best serve people who will only have the faintest memories of my father when they grow up.

Back in the fall of 2016, my son Zeke tried out an early version of the Dadbot. A 7-year-old, he grasped the essential concept faster than adults typically do. “This is like talking to Siri,” he said. He played with the Dadbot for a few minutes, then went off to dinner, seemingly unimpressed. In the following months Zeke was often with us when we visited my dad. Zeke cried the morning his Papou died. But he was back to playing Pokémon with his usual relish by the afternoon. I couldn’t tell how much he was affected.

Now, several weeks after my dad’s death, Zeke surprises me by asking, “Can we talk to the chatbot?” Confused, I wonder if Zeke wants to hurl elementary school insults at Siri, a favorite pastime of his when he can snatch my phone. “Uh, which chatbot?” I warily ask.

“Oh, Dad,” he says. “The Papou one, of course.” So I hand him the phone.

https://www.wired.com/story/a-sons-race-to-give-his-dying-father-artificial-immortality

Frozen remains of missing couple emerge from Swiss glacier

Marcelin and Francine Dumoulin disappeared 75 years ago while hiking to a meadow in the Swiss Alps.


Marcelin and Francine Dumoulin disappeared while checking on their cows in the Swiss Alps in August 1942. (Photo: SRF/swissinfo.ch)

On a summer’s day in August 1942, Marcelin and Francine Dumoulin took a hike into the Swiss Alps above the small village of Chandolin. While the purpose of their excursion was to check on the status of their cows grazing in a mountain meadow, it was also an opportunity to briefly enjoy time alone as a couple. Marcelin, 40, a shoemaker, and Francine, 37, a teacher, had spent much of the last several years raising a family of seven children.

“It was the first time my mother went with him on such an excursion,” their youngest daughter, Marceline Udry-Dumoulin, 75, told the Lausanne daily Le Matin. “She was always pregnant and couldn’t climb in the difficult conditions of a glacier.”

When the couple failed to return that evening, search parties were sent out to find them. For two and a half months, locals scoured the higher elevations hoping for some trace of the missing couple. None was ever found.


Swiss police circle the spot where the remains of the couple were discovered during a routine inspection of the area. (Photo: Swiss Police)

On July 14, nearly 75 years after they first went missing, a ski lift operator on a routine inspection discovered the mummified remains of the couple at the base of a receding glacier. Also present were personal items such as backpacks, mess kits, a glass bottle and even identification papers.

“The bodies were lying near each other. It was a man and a woman wearing clothing dating from the period of World War Two,” Bernhard Tschannen, the director of Glacier 3000, told the paper. “They were perfectly preserved in the glacier and their belongings were intact.”


Despite spending 75 years trapped in ice, all of the couple’s belongings remained relatively unscathed. (Photo: Swiss Police)

It is thought that the couple fell into one of the many crevasses in the region and were subsequently entombed in the glacier. Officials confirmed their identities by cross-matching their DNA with that of relatives.

For Udry-Dumoulin, a lifetime of heartache over the fate of her parents has finally come to an end.

“We spent our whole lives looking for them, without stopping. We thought that we could give them the funeral they deserved one day,” she said. “I can say that after 75 years of waiting this news gives me a deep sense of calm.”

https://www.mnn.com/lifestyle/arts-culture/blogs/frozen-remains-missing-couple-emerge-swiss-glacier