Watching TV makes Americans more passive and accepting of authority

Historically, television viewing has been used by various authorities to quiet potentially disruptive people, from kids to psychiatric inpatients to prison inmates. In 1992, Newsweek (“Hooking Up at the Big House”) reported, “Faced with severe overcrowding and limited budgets for rehabilitation and counseling, more and more prison officials are using TV to keep inmates quiet.” Joe Corpier, a convicted murderer, was quoted: “If there’s a good movie, it’s usually pretty quiet through the whole institution.” Both public and private-enterprise prisons have recognized that providing inmates with cable television is a more economical way to keep them quiet and subdued than hiring more guards.

Just as I have not emptied my refrigerator of beer, I have not gotten rid of my television, but I recognize the effects of beer and TV. During some dismal periods of my life, TV has been my “drug of choice,” and I’ve watched thousands of hours of TV sports and escapist crap. When I don’t need to take the edge off, I have watched Bill Moyers, Frontline, and other “good television.” But I don’t kid myself—the research shows that the more TV of any kind we watch, the more passive most of us become.

American TV Viewing

Sociologist Robert Putnam in Bowling Alone (2000) reported that in 1950 about 10 percent of American homes had television sets, a figure that had grown to more than 99 percent by the time he wrote. Putnam also reported that the number of TVs in the average U.S. household had grown to 2.24 sets, with 66 percent of households having three or more sets; that the TV set is turned on in the average U.S. home for seven hours a day; that two-thirds of Americans regularly watch TV during dinner; and that about 40 percent of Americans’ leisure time is spent on television. He also reported that spouses spend three to four times more time watching television together than they do talking to each other.

In 2009, the Nielsen Company reported that U.S. TV viewing was at an all-time high, with the average American viewing 151 hours of television per month if one includes the following “three screens”: a television set, a laptop/personal computer, and a cell phone. This increase, according to Nielsen, is part of a long-term trend attributable not only to the greater availability of screens, more ways to watch, and the spread of digital video recorders such as TiVo, but also to a tanking economy creating the need for low-cost diversions. And in 2011, the New York Times reported, “Americans watched more television than ever in 2010, according to the Nielsen Company. Total viewing of broadcast networks and basic cable channels rose about 1 percent for the year, to an average of 34 hours per person per week.”

In February 2012, the New York Times reported that young people were watching slightly less television in 2011 than the record highs of 2010. Compared with 2010, those 25-34 and 12-17 years of age watched 9 fewer minutes of television a day in 2011, and 18-to-24-year-olds watched 6 fewer minutes a day. Those 35 and older were watching slightly more. However, there is some controversy about the trends here, as the New York Times also reported: “According to data for the first nine months of 2011, children spent as much time in front of the television set as they did in 2010, and in some cases spent more. But the proportion of live viewing is shrinking while time-shifted viewing is expanding.”

Online television viewing is increasingly significant, especially for young people. In one marketing survey of 1,000 Americans reported in 2010, 64% said they watched at least some TV online. Among those younger than 25 in this survey, 83% watched at least some of their TV online, with 23% of this younger group watching “most” of their TV online and 6% watching “all” of their TV online.

How does the United States compare to the rest of the world in TV viewing? There aren’t many cross-national studies, and precise comparisons are difficult because of different measurements and different time periods. NOP World, a market research organization, interviewed more than thirty thousand people in thirty countries in a study released in 2005 and reported that the United States was one of the highest TV-viewing nations. NationMaster.com, more than a decade ago, reporting on the United States, Australia, and eleven European countries, found that the United States and the United Kingdom were the highest-viewing nations at 28 hours per week, and the lowest-viewing were Finland, Norway, and Sweden at 18 hours per week.

The majority of what Americans view on television—whether on the TV, laptop, or smartphone screen—is through channels owned by six corporations: General Electric (NBC, MSNBC, CNBC, Bravo, and SciFi); Walt Disney (ABC, the Disney Channel, A&E, and Lifetime); Rupert Murdoch’s News Corporation (Fox, Fox Business Channel, National Geographic, and FX); Time Warner (CNN, CW, HBO, Cinemax, Cartoon Network, TBS, and TNT); Viacom (MTV, Nickelodeon/Nick-at-Nite, VH1, BET, and Comedy Central); and CBS (CBS Television Network, CBS Television Distribution Group, Showtime, and CW, a joint venture with Time Warner). In addition to their television holdings, these media giants have vast holdings in radio, movie studios, and publishing.

However, while progressives lament the concentrated corporate control of the media, there is evidence that the mere act of watching TV—regardless of the content—may well have a primary pacifying effect.

Who among us hasn’t spent time watching a show that we didn’t actually like, or found ourselves flipping through the channels long after we’ve concluded that there isn’t anything worth watching?

Jerry Mander is a “reformed sinner” of sorts who left his job in advertising to publish Four Arguments for the Elimination of Television in 1978. He explains how viewers are mesmerized by what TV insiders call “technical events”—quick cuts, zoom-ins, zoom-outs, rolls, pans, animation, music, graphics, and voice-overs, all of which lure viewers to continue watching even though they have no interest in the content. TV insiders know that it’s these technical events—in which viewers see and hear things that real life does not present—that spellbind people to continue watching.

The “hold on us” of TV technical events, according to Robert Kubey and Mihaly Csikszentmihalyi’s 2002 Scientific American article “Television Addiction Is No Mere Metaphor,” is due to our “orienting response”—our instinctive reaction to any sudden or novel stimulus. They report:

In 1986 Byron Reeves of Stanford University, Esther Thorson of the University of Missouri and their colleagues began to study whether the simple formal features of television—cuts, edits, zooms, pans, sudden noises—activate the orienting response, thereby keeping attention on the screen. By watching how brain waves were affected by formal features, the researchers concluded that these stylistic tricks can indeed trigger involuntary responses and “derive their attentional value through the evolutionary significance of detecting movement. . . . It is the form, not the content, of television that is unique.”

Kubey and Csikszentmihalyi claim that TV addiction is “no mere metaphor” but is, at least psychologically, similar to drug addiction. Utilizing their Experience Sampling Method (in which participants carried a beeper and were signaled six to eight times a day at random to report their activity), Kubey and Csikszentmihalyi found that almost immediately after turning on the TV, subjects reported feeling more relaxed, and because this occurs so quickly and the tension returns so rapidly after the TV is turned off, people are conditioned to associate TV viewing with a lack of tension. They concluded:

Habit-forming drugs work in similar ways. A tranquilizer that leaves the body rapidly is much more likely to cause dependence than one that leaves the body slowly, precisely because the user is more aware that the drug’s effects are wearing off.

Similarly, viewers’ vague learned sense that they will feel less relaxed if they stop viewing may be a significant factor in not turning the set off. Mander documents research showing that regardless of the programming, viewers’ brainwaves slow down, shifting viewers into a more passive, nonresistant state. In one study Mander reports, comparing brainwave activity during reading versus television watching, the brain’s response to reading was found to be more active, unlike the passive response to television, no matter what the TV content. Comparing the brain effects of TV viewing to those of reading, Kubey and Csikszentmihalyi report similar EEG results as measured by alpha brain-wave production. Maybe that’s why, when I view a fantastic Bill Moyers interview on TV, I can recall almost nothing except that I enjoyed it, in contrast to how many content specifics I can remember when I read a transcript of a Moyers interview.

Kubey and Csikszentmihalyi’s survey also revealed:

The sense of relaxation ends when the set is turned off, but the feelings of passivity and lowered alertness continue. Survey participants commonly reflect that television has somehow absorbed or sucked out their energy, leaving them depleted. They say they have more difficulty concentrating after viewing than before. In contrast, they rarely indicate such difficulty after reading.

Mander strongly disagrees with the idea that TV is merely a window through which any perception, any argument, or reality may pass. Instead, he claims TV is inherently biased by its technology. For a variety of technical reasons, including TV’s need for sharp contrast to maintain interest, Mander explains that authoritarian-based programming is more technically interesting to viewers than democracy-based programming. War and violence may be unpleasant in real life; however, peace and cooperation make for “boring television.” And charismatic authority figures are more “interesting” on TV than are ordinary citizens debating issues.

In a truly democratic society, one gains knowledge directly through one’s own experience with the world, not through the filter of an authority or what Mander calls a mediated experience. TV-dominated people ultimately accept others’ mediated version of the world rather than discovering their own version based on their own experiences. Robert Keeshan, who played Captain Kangaroo in the long-running children’s program, was critical of television—including so-called “good television”—in a manner rarely heard from those who work in it:

When you are spending time in front of the television, you are not doing other things.

The young child of three or four years is in the stage of the greatest emotional development that human beings undergo. And we only develop when we experience things, real-life things: a conversation with Mother, touching Father, going places, doing things, relating to others. This kind of experience is critical to a young child, and when the child spends thirty-five hours per week in front of the TV set, it is impossible to have the full range of real-life experience that a young child must have. Even if we had an overabundance of good television programs, it wouldn’t solve the problem. Whatever the content of the program, television watching is an isolating experience. Most people are watching alone, but even when watching it with others, they are routinely glued to the TV rather than interacting with one another.

TV keeps us indoors, and it keeps us from mixing it up in real life. People who are watching TV are isolated from other people, from the natural world—even from their own thoughts and senses. TV creates isolation, and because it also reduces our awareness of our own feelings, when we start to feel lonely we are tempted to watch more so as to dull the ache of isolation. Television is a “dream come true” for an authoritarian society. Those with the most money own most of what people see. Fear-based TV programming makes people more afraid and distrustful of one another, which is good for an authoritarian society that depends on a “divide and conquer” strategy. Television isolates people so that they do not join together to govern themselves. Viewing television puts one in a brain state that makes it difficult to think critically, and it quiets and subdues a population. And spending one’s free time isolated and watching TV interferes with the connection to one’s own humanity, and thus makes it easier to accept an authority’s version of society and life.

Whether it is in American penitentiaries or homes, TV is a staple of American pacification. When there’s no beer in our refrigerators, when our pot hookup has been busted, and when we can’t score a psychotropic drug prescription, there is always TV to take the edge off and chill us.

http://www.salon.com/2012/10/30/does_tv_actually_brainwash_americans/

Thanks to SRW for bringing this to the attention of the It’s Interesting community.

South Africans offered free phone for every 60 rats caught

As it was in medieval Hamelin, so it is today in the South African township of Alexandra: wherever you go, you are never far from a rat.

But residents of the Johannesburg suburb have been offered a deal unavailable in the era of the Pied Piper – a free mobile phone for every resident who catches 60 of the rodents.

Alexandra has just turned 100 years old and was the young Nelson Mandela’s first home when he moved to Johannesburg. Its cramped shacks and illegal rubbish dumps sit in brutal contrast with neighbouring Sandton, dubbed the wealthiest square mile in Africa.

The crumbling structures, leaking sewage and discarded piles of rotting food are a perfect breeding ground for rats, much to the torment of residents. There have reportedly been cases of children’s fingers being bitten while they sleep.

In an attempt to fight back, city officials have distributed cages, and the mobile phone company 8ta has sponsored the volunteer ratcatchers.

Resident Joseph Mothapo says he has won two phones and plans to get one for each member of his family. “It’s easy,” he told South Africa’s Mail & Guardian newspaper, wielding a large cage containing rats. “You put your leftover food inside and the rats climb in, getting caught as the trap door closes.”

But there were signs that the PR stunt could backfire, as animal rights activists criticised the initiative on social networks.

On Monday 8ta denied responsibility for the initiative. It said it was a long-time sponsor of a charity called Lifeline, which had taken the decision to hand out phones.

“You will have to ask Lifeline why they decided to use these promotional products,” said Pynee Chetty, an 8ta spokesman. “They do a lot of good community work, including in Alexandra. They used the promotional material to incentivise members of the community. I wasn’t aware this is how they were going to resolve the problem [of rats].”

He added: “We won’t distance ourselves from Lifeline. It is a charity that does a lot of good work and our support for them is steadfast. I don’t want to deny the story. What I’m saying is that it’s not our initiative.”

The Mail & Guardian reported that thousands of rats had been gassed to death by a specialist, Ashford Sidzumo, at the local sports centre. “We record all the people’s details so we can see where the rats are causing the biggest problem,” he was quoted as saying. “We use this to send fumigation teams there.”

Local councillor Julie Moloi told the Mail & Guardian there had been no choice but to carry out the drastic experiment. “We are afraid these rats will take over Alex and it will become a city of rats,” she said.

In another measure, owls have been given to three local schools because of their rat-catching prowess. But wider deployment of the birds may be difficult: Moloi said people kill them because of traditional beliefs that they are to be feared.

‘Smart Carpet’ detects falls and unfamiliar footsteps

A team at the University of Manchester in the UK has developed a carpet that can detect when someone has fallen over or when unfamiliar feet walk across it.

Optical fibres in the carpet’s underlay create a 2D pressure map that distorts when stepped on. Sensors around the carpet’s edges then relay signals to a computer, which analyses the footstep patterns. When a change is detected – such as a sudden stumble and fall – an alarm can be set to sound.

By monitoring footsteps over time, the system can also learn people’s walking patterns and watch out for subtle changes, such as a gradual favouring of one leg over the other. It could then be used to predict the onset of mobility problems in the elderly, for example.
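The Manchester team has not published its algorithms, but both behaviours described here (flagging a sudden fall and tracking gradual gait changes) can be sketched from the pressure-map idea. A minimal illustration in Python; the grid size, thresholds, and pressure units are all invented for the example:

```python
import numpy as np

GRID = (32, 32)            # hypothetical resolution of the pressure map
FALL_AREA_CELLS = 80       # a fall spreads pressure over far more cells
                           # than a footprint does

def contact_area(frame: np.ndarray, pressure_min: float = 0.2) -> int:
    """Count cells registering significant pressure."""
    return int((frame > pressure_min).sum())

def is_fall(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Flag a sudden jump from footprint-sized to body-sized contact."""
    return (contact_area(frame) > FALL_AREA_CELLS
            and contact_area(prev_frame) < FALL_AREA_CELLS // 4)

def gait_asymmetry(step_peaks: list[float]) -> float:
    """Ratio of mean left-step to right-step peak pressure.

    A ratio drifting away from ~1.0 over weeks would suggest the
    gradual favouring of one leg that the researchers describe.
    """
    left, right = step_peaks[0::2], step_peaks[1::2]
    return float(np.mean(left) / np.mean(right))

# A standing footprint followed by a full-body pressure pattern:
prev = np.zeros(GRID); prev[10:14, 10:13] = 1.0   # one foot on the floor
cur = np.zeros(GRID);  cur[8:24, 6:20] = 0.6      # a body on the floor
print(is_fall(prev, cur))                          # True -> raise the alarm
print(round(gait_asymmetry([1.0, 0.9] * 10), 2))   # 1.11 -> favouring a leg
```

A real system would of course work from calibrated sensor data and learned per-person baselines rather than fixed thresholds like these.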

The carpet could also be used as an intruder alert, says team member Patricia Scully. “In theory, we could identify footsteps of individuals and the shoes they are wearing,” she says.

But it needn’t all be about feet. The system is designed to be versatile, meaning that different sensors could instead be used to provide early warning of chemical spillages or fire.

http://www.newscientist.com/blogs/onepercent/2012/09/smart-carpet-detects-falls—a.html?DCMP=OTC-rss&nsref=online-news

DNA is the future of data storage

A bioengineer and a geneticist at Harvard’s Wyss Institute have successfully stored 5.5 petabits of data — around 700 terabytes — in a single gram of DNA, smashing the previous DNA data density record by a thousand times.

The work, carried out by George Church and Sri Kosuri, basically treats DNA as just another digital storage device. Instead of binary data being encoded as magnetic regions on a hard drive platter, strands of DNA that store 96 bits are synthesized, with each of the bases (TGAC) representing a binary value (T and G = 1, A and C = 0).

To read the data stored in DNA, you simply sequence it — just as if you were sequencing the human genome — and convert each of the TGAC bases back into binary. To aid with sequencing, each strand of DNA has a 19-bit address block at the start — so a whole vat of DNA can be sequenced out of order, and then sorted into usable data using the addresses.
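The published scheme also involves sequencing primers and redundancy, but the bit-to-base mapping and address layout described above are easy to sketch. A toy round trip in Python, assuming one bit per base (T/G = 1, A/C = 0), a 19-bit address block, and a 96-bit payload; the random pick between the two bases for each bit is a crude stand-in for the real scheme’s freedom to avoid long repeats:

```python
import random

ZERO_BASES, ONE_BASES = "AC", "TG"   # A or C encodes 0, T or G encodes 1
ADDRESS_BITS, PAYLOAD_BITS = 19, 96  # layout described in the article
BIT_FOR_BASE = {"T": "1", "G": "1", "A": "0", "C": "0"}

def encode_strand(address: int, payload_bits: str) -> str:
    """One strand: a 19-bit address block followed by 96 payload bits."""
    bits = format(address, f"0{ADDRESS_BITS}b") + payload_bits
    # Choosing randomly between the two bases for each bit loosely mimics
    # the real scheme's freedom to dodge long repeats and skewed GC content.
    return "".join(random.choice(ONE_BASES if b == "1" else ZERO_BASES)
                   for b in bits)

def decode_strand(strand: str) -> tuple[int, str]:
    """Sequencing in reverse: bases -> bits, split into address and payload."""
    bits = "".join(BIT_FOR_BASE[base] for base in strand)
    return int(bits[:ADDRESS_BITS], 2), bits[ADDRESS_BITS:]

# Round trip: encode two strands, shuffle them (a vat of DNA has no
# order), then reassemble by address.
payload = format(int.from_bytes(b"Hi!", "big"), f"0{PAYLOAD_BITS}b")
strands = [encode_strand(addr, payload) for addr in (3, 7)]
random.shuffle(strands)
for address, bits in sorted(decode_strand(s) for s in strands):
    print(address, bits == payload)   # 3 True, then 7 True
```

Sorting by the decoded address is what lets a disordered vat of strands be reassembled into a usable file.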

Scientists have been eyeing up DNA as a potential storage medium for a long time, for three very good reasons: It’s incredibly dense (you can store one bit per base, and a base is only a few atoms large); it’s volumetric (beaker) rather than planar (hard disk); and it’s incredibly stable — where other bleeding-edge storage mediums need to be kept in sub-zero vacuums, DNA can survive for hundreds of thousands of years in a box in your garage.

It is only with recent advances in microfluidics and labs-on-a-chip that synthesizing and sequencing DNA has become an everyday task, though. While it took years for the original Human Genome Project to analyze a single human genome (some 3 billion DNA base pairs), modern lab equipment with microfluidic chips can do it in hours. Now this isn’t to say that Church and Kosuri’s DNA storage is fast — but it’s fast enough for very-long-term archival.

Just think about it for a moment: One gram of DNA can store 700 terabytes of data. That’s 14,000 50-gigabyte Blu-ray discs… in a droplet of DNA that would fit on the tip of your pinky. To store the same kind of data on hard drives — the densest storage medium in use today — you’d need 233 3TB drives, weighing a total of 151 kilos. In Church and Kosuri’s case, they have successfully stored around 700 kilobytes of data in DNA — Church’s latest book, in fact — and proceeded to make 70 billion copies (which they claim, jokingly, makes it the best-selling book of all time!) totaling 44 petabytes of data stored.
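Those figures are internally consistent; a quick back-of-the-envelope check (the per-drive weight is my assumption, not a number from the article):

```python
# Back-of-the-envelope check of the article's numbers.
print(5.5e15 / 8 / 1e12)        # 687.5 TB, rounded to "around 700 TB"
print(700e12 / 50e9)            # 14000.0 -- 50 GB Blu-ray discs
drives = 700 // 3               # 3 TB drives needed for 700 TB
print(drives, drives * 0.65)    # 233 drives; ~151 kg at an assumed 0.65 kg each
```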

Looking forward, they foresee a world where biological storage would allow us to record anything and everything without reservation. Today, we wouldn’t dream of blanketing every square meter of Earth with cameras, and recording every moment for all eternity/human posterity — we simply don’t have the storage capacity. There is a reason that backed up data is usually only kept for a few weeks or months — it just isn’t feasible to have warehouses full of hard drives, which could fail at any time. If the entirety of human knowledge — every book, uttered word, and funny cat video — can be stored in a few hundred kilos of DNA, it might just be possible to record everything.

http://refreshingnews99.blogspot.in/2012/08/harvard-cracks-dna-storage-crams-700.html

Thanks to kebmodee for bringing this to the attention of the It’s Interesting community.

Adidas develops social-media sneakers

Adidas is coming out with the Social Media Shoe. With the help of customizer NASH Money, adidas has managed to inject a healthy dose of technology into a 2012 adidas Barricade tennis sneaker. The Adidas Social Media Shoe merges an Arduino unit, an LCD display, and LED lighting. The external LCD display shows relevant information to the user, while personalized software polls the Twitter API to share specific data on the shoe’s screen.
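Neither NASH Money’s firmware nor the polling software has been released, so the following is only a guess at the general shape of such a rig: a host script fetches a statistic from Twitter (here via the tweepy library, a common choice) and pushes a line of text over USB serial to the Arduino driving the LCD. The credentials, account, port, and message format are all placeholders.

```python
import time

import serial   # pyserial: pip install pyserial
import tweepy   # pip install tweepy

BEARER_TOKEN = "..."       # placeholder Twitter API credential
USERNAME = "adidas"        # placeholder account to display
PORT = "/dev/ttyUSB0"      # placeholder serial port for the Arduino

client = tweepy.Client(bearer_token=BEARER_TOKEN)
shoe = serial.Serial(PORT, 9600, timeout=1)

while True:
    user = client.get_user(username=USERNAME,
                           user_fields=["public_metrics"])
    followers = user.data.public_metrics["followers_count"]
    # The Arduino sketch would read this line and draw it on the LCD.
    shoe.write(f"{USERNAME}: {followers} followers\n".encode())
    time.sleep(60)         # poll once a minute
```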

http://www.coolest-gadgets.com/20120811/adidas-social-media-shoe/

Disney Is ‘Face Cloning’ People to Create Terrifyingly Realistic Robots

The Hall of Presidents is about to get a whole lot creepier, at least if Disney’s researchers get their way. That’s because they’re “face cloning” people at a lab in Zurich in order to create the most realistic animatronic characters ever made.

First of all, yes, Disney has a laboratory in Zurich. It’s one of six around the world where the company researches things like computer graphics, 3D technology and, I can only assume, how to most efficiently suck money out of your pocket when you visit Disneyworld.

What does “physical face cloning” involve? Researchers used video cameras to capture several expressions on a subject’s face, recreating them in 3D computer models down to individual wrinkles and facial hair. They then experimented with different thicknesses of silicone for each part of the face until they could create a mold for the perfect synthetic skin.

They slapped that silicone skin on a 3D-printed model of the subject’s head to create their very own replicant. As the authors of the study point out (PDF), it’s not all that different from creating a 3D model for a Pixar movie, except that in real life you have to worry about things like materials and the motors that make the face change expressions.

The plan is to develop a “complete process for automating the physical reproduction of a human face on an animatronics device,” meaning all you’ll have to do in the future is record a person’s face and the computer will do the rest. This is a different process than the one used to make the famous Geminoid robots from Osaka University, whose skin is individually crafted by artists through trial and error.

The next step is developing more advanced actuators and multi-layered synthetic skin to give the researchers more degrees of freedom in mimicking facial expressions. That means next time you go on the Pirates of the Caribbean ride, don’t be surprised to see a terrifyingly realistic Johnny Depp-bot cavorting with an appropriately dead-eyed Orlando Bloom.

Read more: http://techland.time.com/2012/08/15/disney-is-face-cloning-people-to-create-terrifyingly-realistic-robots/?iid=tl-article-latest#ixzz23fBwVu61

Retinal device restores sight to blind mice

Researchers report they have developed in mice what they believe might one day become a breakthrough for humans: a retinal prosthesis that could restore near-normal sight to those who have lost their vision.

That would be a welcome development for the roughly 25 million people worldwide who are blind because of retinal disease, most notably macular degeneration.

The notion of using prosthetics to combat blindness is not new, with prior efforts involving retinal electrode implantation and/or gene therapy restoring a limited ability to pick out spots and rough edges of light.

The current effort takes matters to a new level. The scientists fashioned a prosthetic system packed with computer chips that replicate the “neural impulse codes” the eye uses to transmit light signals to the brain.

“This is a unique approach that hasn’t really been explored before, and we’re really very excited about it,” said study author Sheila Nirenberg, a professor and computational neuroscientist in the department of physiology and biophysics at Weill Medical College of Cornell University in New York City. “I’ve actually been working on this for 10 years. And suddenly, after a lot of work, I knew immediately that I could make a prosthetic that would work, by making one that could take in images and process them into a code that the brain can understand.”

Nirenberg and her co-author Chethan Pandarinath (a former Cornell graduate student now conducting postdoctoral research at Stanford University School of Medicine) report their work in the Aug. 14 issue of Proceedings of the National Academy of Sciences. Their efforts were funded by the U.S. National Institutes of Health and Cornell University’s Institute for Computational Biomedicine.

The study authors explained that retinal diseases destroy the light-catching photoreceptor cells on the retina’s surface. Without those, the eye cannot convert light into neural signals that can be sent to the brain.

However, most of these patients retain the use of their retina’s “output cells” — called ganglion cells — whose job it is to actually send these impulses to the brain. The goal, therefore, would be to jumpstart these ganglion cells by using a light-catching device that could produce critical neural signaling.

But past efforts to implant electrodes directly into the eye have only achieved a small degree of ganglion stimulation, and alternate strategies using gene therapy to insert light-sensitive proteins directly into the retina have also fallen short, the researchers said.

Nirenberg theorized that stimulation alone wasn’t enough if the neural signals weren’t exact replicas of those the brain receives from a healthy retina.

“So, what we did is figure out this code, the right set of mathematical equations,” Nirenberg explained. And by incorporating the code right into their prosthetic device’s chip, she and Pandarinath generated the kind of electrical and light impulses that the brain understood.
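The article doesn’t reproduce Nirenberg’s equations, but encoders of this general kind are often built as linear-nonlinear cascades: filter the image with a receptive-field kernel, pass the result through a nonlinearity, and emit spikes at the resulting rate. A schematic Python sketch of that recipe; the kernel, gain, and rates are invented for illustration and are not the fitted model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def center_surround_kernel(size: int = 9, sigma_c: float = 1.0,
                           sigma_s: float = 3.0) -> np.ndarray:
    """Difference-of-Gaussians: a crude stand-in for a ganglion cell's
    receptive field (excitatory center, inhibitory surround)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    gauss = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return gauss(sigma_c) - gauss(sigma_s)

def encode_patch(patch: np.ndarray, kernel: np.ndarray,
                 gain: float = 50.0) -> int:
    """Linear filter -> rectifying nonlinearity -> Poisson spike count.

    This is the generic linear-nonlinear-Poisson recipe; the actual
    prosthesis fits cell-specific parameters so its output mimics a
    healthy retina's.
    """
    drive = float((patch * kernel).sum())   # linear stage
    rate_hz = gain * max(drive, 0.0)        # static nonlinearity
    return int(rng.poisson(rate_hz))        # spikes in a one-second window

kernel = center_surround_kernel()
spot = np.zeros((9, 9)); spot[4, 4] = 1.0      # a bright point of light
print(encode_patch(spot, kernel))              # ~7 spikes for the bright spot
print(encode_patch(np.zeros((9, 9)), kernel))  # 0 spikes for a blank patch
```

In the device, spike trains like these would then drive the gene-therapy-sensitized ganglion cells described next.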

The team also used gene therapy to hypersensitize the ganglion output cells and get them to deliver the visual message up the chain of command.

Behavioral tests were then conducted among blind mice given a code-outfitted retinal prosthetic and among those given a prosthetic that lacked the code in question.

The result: The code group fared dramatically better on visual tracking than the non-code group, with the former able to distinguish images nearly as well as mice with healthy retinas.

“Now we hope to move on to human trials as soon as possible,” said Nirenberg. “Of course, we have to conduct standard safety studies before we get there. And I would say that we’re looking at five to seven years before this is something that might be ready to go, in the best possible case. But we do hope to start clinical trials in the next one to two years.”

Results achieved in animal studies don’t necessarily translate to humans.

Dr. Alfred Sommer, a professor of ophthalmology at Johns Hopkins University in Baltimore and dean emeritus of Hopkins’ Bloomberg School of Public Health, urged caution about the findings.

“This could be revolutionary,” he said. “But I doubt it. It’s a very, very complicated business. And people have been working on it intensively and incrementally for the last 30 years.”

“The fact that they have done something that sounds a little bit better than the last set of results is great,” Sommer added.  “It’s terrific. But this approach is really in its infancy. And I guarantee that it will be a long time before they get to the point where they can really restore vision to people using prosthetics.”

Other advances may offer benefits in the meantime, he said. “We now have new therapies that we didn’t have even five years ago,” Sommer said. “So we may be reaching a state where the amount of people losing their sight will decline even as these new techniques for providing artificial vision improve. It may not be as sci-fi. But I think it’s infinitely more important at this stage.”

http://health.usnews.com/health-news/news/articles/2012/08/13/retinal-device-restores-sight-to-blind-mice

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Texas man texts about danger of texting while driving just before he plunges off cliff

A college student from Texas believes he is lucky to be alive after a terrible crash. He was texting and driving when his truck flew off of a cliff.

Chance Bothe’s truck plunged off of a bridge and into a ravine. One of the last things he typed indicated what almost happened to him.

He wrote, “I need to quit texting, because I could die in a car accident.”

After the crash, Chance had a broken neck, a crushed face, a fractured skull, and traumatic brain injuries. Doctors had to bring him back to life three times. Now, six months later, he’s finally able to talk about what happened.

“They just need to understand, don’t do it. Don’t do it. It’s not worth losing your life,” he said. “I went to my grandmother’s funeral not long ago, and I kept thinking, it kept jumping into my head, I’m surprised that’s not me up in that casket. I came very close to that, to being gone forever.”

Chance’s father said, if he had a child just learning to drive, he would disable texting and Internet on their phone.

As of August 1st, drivers in Alabama will face a $25 fine the first time they are caught texting behind the wheel.

Narrative Science: Can computers write convincing journalism stories?

Computer applications can drive cars, fly planes, play chess and even make music.

But can an app tell a story?

Chicago-based company Narrative Science has set out to prove that computers can tell stories good enough for a fickle human audience. It has created a program that takes raw data and turns it into a story, a system that’s worked well enough for the company to earn its own byline on Forbes.com.

Kristian Hammond, Narrative Science’s chief technology officer, said his team started the program by taking baseball box scores and turning them into game summaries.

“We did college baseball,” Hammond recalled. “And we built out a system that would take box scores and historical information, and we would write a game recap after a game. And we really liked it.”
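Narrative Science hasn’t published its engine, but the box-score-to-recap idea can be caricatured in a few lines of Python: pick an “angle” the data supports, then fill a template. The team names, fields, and thresholds below are invented:

```python
def recap(box: dict) -> str:
    """Turn a minimal box score into a one-paragraph game summary."""
    home, away = box["home"], box["away"]
    winner, loser = (home, away) if home["runs"] > away["runs"] else (away, home)
    margin = winner["runs"] - loser["runs"]
    # "Angle" selection: the lead depends on what the data says is notable.
    if margin >= 7:
        angle = f'{winner["name"]} routed {loser["name"]}'
    elif margin == 1:
        angle = f'{winner["name"]} edged {loser["name"]} in a one-run game'
    else:
        angle = f'{winner["name"]} beat {loser["name"]}'
    return (f'{angle}, {winner["runs"]}-{loser["runs"]}. '
            f'{winner["star"]} led the way with {winner["star_hits"]} hits.')

print(recap({
    "home": {"name": "the Wildcats", "runs": 9, "star": "Lee", "star_hits": 3},
    "away": {"name": "the Bears", "runs": 2, "star": "Cho", "star_hits": 1},
}))
# -> the Wildcats routed the Bears, 9-2. Lee led the way with 3 hits.
```

A production system would layer many more angles, richer statistics, and variation in phrasing on top of the same basic move.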

Narrative Science then began branching out into finance and other topics that are driven heavily by data. Soon, Hammond says, large companies came looking for help sorting huge amounts of data themselves.

“I think the place where this technology is absolutely essential is the area that’s loosely referred to as big data,” Hammond said. “So almost every company in the world has decided at one point that in order to do a really good job, they need to meter and monitor everything.”

Narrative Science hasn’t disclosed how much money the app makes or whether it is profitable. The firm employs about 30 people. At least one other company, based in North Carolina, is working on similar technology.

Meanwhile, Hammond says Narrative Science is looking to eventually expand into long-form news stories.

That’s an idea that’s unsettling to some journalism experts.

Kevin Smith, head of the Society of Professional Journalists Ethics Committee, says he laughed when he heard about the program.

“I can remember sitting there doing high school football games on a Friday night and using three-paragraph formulas,” Smith said. “So it made me laugh, thinking they have made a computer that can do that work.”

Smith says that, ultimately, it’s going to be hard for people to share the uniquely human custom of storytelling with a machine.

“I can’t imagine that a machine is going to tell a story and present it in a way that other human beings are going to accept it,” he said. “At least not at this time. I don’t see that happening. And the fact that we’re even attempting to do it — we shouldn’t be doing it.”

Other experts are not as concerned. Greg Bowers, who teaches at the Missouri School of Journalism, says computers don’t have the same capacity for pitch, emotion and story structure.

“I’m not alarmed about it as some people are,” Bowers said. “If you’re writing briefs that can be easily replicated by a computer, then you’re not trying hard enough.”

http://www.cnn.com/2012/05/11/tech/innovation/computer-assisted-writing/index.html?hpt=hp_c2

Japanese Remote Hand Shaking

Japanese scientists at Osaka University have created a robot hand so people can shake hands with someone remotely. The robot hand communicates grip force, body temperature and touch. The creators are considering building telepresence robots that incorporate the hand, so that users can shake hands with distant partners.

The creators of the robot hand say, “People have the preconceived notion that a robot hand will feel cold, so we give it a temperature slightly higher than skin temperature.”

http://www.sciencespacerobots.com/blog/32820121

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.