Human Immortality in 33 Years Claims Dmitry Itskov’s 2045 Initiative

Although James Cameron’s “Avatar” took place more than 140 years into the future, a Russian billionaire has teamed with dozens of scientists to lay out a plan that would use avatars to transfer human consciousness into an artificial form. The goal: human immortality by 2045.

The 2045 Initiative, a life-extension project founded by 31-year-old Russian billionaire Dmitry Itskov in February 2011, offers a timeline for immortality over the next 33 years. Beginning with remotely controlled robotic avatars and re-creating the human brain through computer models, the end result would be human immortality in the form of holographic avatars.

The 2045 Initiative, which has mounted a major social media blitz, brought together 30 top Russian scientists to develop the “immortal” technology, laying out the plan for human immortality on its website.

“The first phase is to create a humanoid robot dubbed ‘avatar,’ and a state-of-the-art brain-computer interface system. The next phase consists of creating a life-support system for the human brain and connecting it to the ‘avatar.’ The final phase … is to create an artificial brain in which to transfer the original individual consciousness into,” reads the plan.

Here’s the 2045 Initiative’s timeline:

2015 – 2020: A robotic copy of a human body remotely controlled by a brain-computer interface

2020 – 2025: An avatar is created in which a human brain can be transplanted at the end of life

2030 – 2035: An avatar that can now contain an artificial brain in which a human personality can be transferred at the end of life

2040 – 2045: A holographic avatar emerges

In addition to taking the Internet by storm, the 2045 Initiative has also launched its own political party, called Evolution 2045, pushing a new strategy for human development. The Russia-based party takes a global approach, encouraging other countries to follow in its footsteps “not in the arms race, but in the race for building a bright future for mankind.”

Last month Itskov appealed to members of the Forbes World’s Billionaires List, urging them to take heed of the “vital importance of funding scientific development in the field of cybernetic immortality and the artificial body.

“Contributing to cutting-edge innovations in the fields of neuroscience, nanotechnology and android robotics is more than building a brighter future for human civilization. [It’s also] a wise and profitable business strategy that will create a new and vibrant industry of immortality — limitless in its importance and scale. This kind of investment will change every aspect of business as we know it,” read Itskov’s open letter.

Itskov plans to host a Global Future Congress meeting next year in New York. A previous event was held last February in Moscow.

This massive hypothetical technology would give the “new” mankind amazing survival abilities, according to the Initiative.

“The new human being will receive a huge range of abilities and will be capable of withstanding extreme external conditions easily: high temperatures, pressure, radiation, lack of oxygen,” claims the Initiative on its website.

And according to a slick video produced by the 2045 Initiative, once the hologram-like avatar reinvents mankind’s scientific and social structure, war and violence will go by the wayside as “spiritual self-improvement” becomes mankind’s primary goal.

But what about the average Joe who just wants to live forever?

The cost of these avatars should be on par with that of an automobile, the Initiative’s website assures, as soon as mass production begins.

http://abcnews.go.com/blogs/technology/2012/08/human-immortality-in-33-years-claims-dmitry-itskovs-2045-initiative/#.UOcJvQYxrPo.email

Thanks to H.M. for bringing this to the attention of the It’s Interesting community.

Patented Book Writing System Creates, Sells Hundreds Of Thousands Of Books On Amazon

Philip M. Parker, Professor of Marketing at INSEAD Business School, has had a side project for over 10 years. He’s created a computer system that can write books about specific subjects in about 20 minutes. The patented algorithm has so far generated hundreds of thousands of books. In fact, Amazon lists over 100,000 books attributed to Parker, and over 700,000 works listed for his company, ICON Group International, Inc. This doesn’t include the private works, such as internal reports, created for companies or licensing of the system itself through a separate entity called EdgeMaven Media.

Parker is not so much an author as a compiler, but the end result is the same: boatloads of written works.

Now these books aren’t your typical reading material. Common categories include specialized technical and business reports, language dictionaries bearing the “Webster’s” moniker (which is in the public domain), rare disease overviews, and even crossword puzzle books for learning foreign languages, but they all have the same thing in common: they are automatically generated by software.

The system automates this process by building databases of information to source from, providing an interface to customize a query about a topic, and creating templates for information to be packaged. Because digital ebooks and print-on-demand services have become commonplace, topics can be listed in Amazon without even being “written” yet.
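Parker’s pipeline, as described, boils down to a database of facts, a query about a topic, and templates that package the result. Here is a minimal sketch of that idea in Python; the template wording, field names, and data below are invented placeholders, not drawn from Parker’s patented system.

```python
# Minimal sketch of template-driven "book" generation: a topical database,
# a query for one topic, and a fill-in-the-blanks template.
# All data, field names, and template text are hypothetical placeholders.

TEMPLATE = (
    "{title}\n\n"
    "{topic} is {definition}. "
    "It was first described in {year}. "
    "Estimated worldwide market size: {market_size}.\n"
)

DATABASE = {
    "wood toilet seats": {
        "definition": "a category of household sanitary hardware",
        "year": "the nineteenth century",
        "market_size": "unknown (illustrative placeholder)",
    },
}

def generate_report(topic: str) -> str:
    """Pull the topic's record from the database and pour it into the template."""
    record = DATABASE[topic]
    return TEMPLATE.format(
        title=f"The World Outlook for {topic.title()}",
        topic=topic.capitalize(),
        **record,
    )

if __name__ == "__main__":
    print(generate_report("wood toilet seats"))
```

The point of the sketch is only that once the template and database exist, producing the ten-thousandth "book" costs no more human effort than producing the first.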

The abstract for the U.S. patent issued in 2007 describes the system:

The present invention provides for the automatic authoring, marketing, and/or distributing of title material. A computer automatically authors material. The material is automatically formatted into a desired format, resulting in a title material. The title material may also be automatically distributed to a recipient. Meta material, marketing material, and control material are automatically authored and, if desired, distributed to a recipient. Further, the title may be authored on demand, such that it may be in any desired language and with the latest version and content.

To be clear, this isn’t just software alone but a computer system designed to write for a specific genre. The system’s database is filled with genre-relevant content, and its templates are coded to reflect domain knowledge, that is, to write the way an expert in that particular field or genre would. The system is designed to avoid plagiarism, so the patent aims to produce original, though not necessarily creative, works. In other words, if a kind of content can be broken down into a formula, the system can package related but different content in that same formula repeatedly, ad infinitum.

The success (and brilliance) of this system is that Parker designed the algorithms to mimic the thought process an expert would go through in writing about a topic, which mostly amounts to deconstructing the content of a genre. He has some experience in this, having written at least three books the old-fashioned way. It’s the recognition that content creation is, for the most part, algorithmic that allows it to be coded as artificial intelligence.

A sampling of the list of books attributed to Parker is instructive:

– Webster’s Slovak – English Thesaurus Dictionary for $28.95
– The 2007-2012 World Outlook for Wood Toilet Seats for $795
– The World Market for Rubber Sheath Contraceptives (Condoms): A 2007 Global Trade Perspective for $325
– Ellis-van Creveld Syndrome – A Bibliography and Dictionary for Physicians, Patients, and Genome Researchers for $28.95
– Webster’s English to Haitian Creole Crossword Puzzles: Level 1 for $14.95

Considering that a single book costs somewhere between $0.20 and $0.50 to produce (the cost of electricity and hardware), the prices shown represent considerable profit, even if very few copies are sold.

In truth, many nonfiction books, like news articles, often fall into formulas that cover the who, what, where, when, and why of a topic, perhaps the history or projected future, and some insight. Regardless of how topical information is presented or what comes with it, the core data must be present, even for incredibly obscure topics. And Parker is not alone in automating content: the Chicago-based Narrative Science has been producing sports news and financial articles for Forbes for a while.

So, what’s the next book genre Parker is targeting to have software produce? Romance novels.

Although a novel is a work of fiction, it’s no secret that certain genres, such as romance, lend themselves to formulas. That may not make these works rank high for their literary value, but they certainly do well for their entertainment value. Somewhat surprisingly, romance fiction has the largest share of the consumer book market, with revenue of nearly $1.37 billion in 2011.

But can artificial intelligence produce creative works on par with what a human can produce? Yes…eventually. Perhaps the better questions are how soon will it happen and how relevant will they be? The answers may be right on the horizon if Parker can churn out romance novels that are read by the masses. Frankly, any creative work produced by artificial intelligence will be “successful” if it reads like a human being wrote it, or more precisely, like a human intelligence is behind the work.

But books may be just the beginning.

As Parker notes in his video, the software doesn’t have to be limited to written works. Using 3D animation and avatars, a variety of audio and video formats can be generated, and Parker indicates that these are being explored. Avatars that read compiled news stories might become preferred, especially if viewers were allowed to customize who reads the news to them and how in-depth those stories need to be.

Content creation technology could converge with other developments, such as automated video transcription, to expand the pool of content that can be pulled from. Language translators would help not only with content previously produced all over the world, but with audio and video in real time as well. Additionally, with lifelogging allowing people to capture everything they say or that is said to them, those recordings could be packaged into personal biographies. Add big data and analytics into the mix, and you could have some serious content creation capabilities, all performed by designated computers.

The future of content is increasingly becoming the stuff of science fiction, but we still have some years before content creation is entirely in the hands of software. If you have any doubts about where we are headed, consider this: the first novel written by a computer was published four years ago.

IBM predicts computers will have the five human senses within five years

Every year, IBM releases its “5 in 5”: five technologies that it predicts will change the world in the next five years. This year, IBM is taking on the five senses and how we can make our computers work more like a human being. Touch, sight, hearing, taste, and smell are all on the table, and IBM has published five profiles of employees researching how computers will use these senses going forward.

Robyn Schwartz, an Associate Director at IBM, explains the sense of touch and how vibrations on handheld devices like smartphones can be used to convey texture. Simply by having a predefined and widely understood conversion of real-life touch into vibration patterns, we can simulate touch digitally.
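As a rough illustration of that “predefined conversion” idea, here is a small sketch that maps named textures to the vibration parameters a handset’s haptic motor might play back. The texture names and parameter values are invented for illustration; they are not IBM’s actual scheme.

```python
# Hypothetical lookup table converting textures to vibration patterns
# (frequency in Hz, amplitude 0-1, pulse length in ms).
# The values here are invented for illustration only.
from typing import NamedTuple

class Vibration(NamedTuple):
    frequency_hz: float
    amplitude: float
    pulse_ms: int

TEXTURE_TO_VIBRATION = {
    "silk":      Vibration(frequency_hz=250.0, amplitude=0.2, pulse_ms=10),
    "denim":     Vibration(frequency_hz=120.0, amplitude=0.5, pulse_ms=30),
    "sandpaper": Vibration(frequency_hz=60.0,  amplitude=0.9, pulse_ms=60),
}

def haptic_pattern(texture: str) -> Vibration:
    """Return the vibration parameters a handset would play for this texture."""
    return TEXTURE_TO_VIBRATION.get(texture, Vibration(100.0, 0.3, 20))

print(haptic_pattern("denim"))
```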

These technologies aren’t just for fun, though — they can save lives. The sight technology focuses on computers being able to distinguish important information in images. During a disaster or tragedy, people posting smartphone pictures to services like Twitter could actually be used to help emergency agencies analyze the problem, and work out better solutions. The sound technology could analyze the creaking of buildings and bridges, and predict failure before anyone is harmed. Based on the odors your body creates, your doctor could use the smelling technology to diagnose a whole range of diseases before traditional methods could detect them. These are important technologies, and these researchers are absolutely changing the world.

The sense of taste technology is perhaps the most interesting out of the five. The developed world has an obesity issue, and the taste research is being used to fight this. Instead of just expecting people to eat healthily and reject junk food, this research is examining how humans taste and experience food on a personal level. Genetics and environment drastically alter how individuals taste different types of food. When we completely understand what drives humans to want certain foods while rejecting others, we can tailor our meals in ways that satisfy our individual cravings while providing balanced nutrition. Using this technology, and introducing it to children at a grade school level, could help solve our severe obesity problem. Adults are often difficult to influence, but the next generation can truly benefit from computer-optimized meals.

IBM does make a compelling case, but if you’re skeptical about these predictions, you can view the current progress of IBM’s previous ones. Even last year’s predictions are well on their way to becoming reality.

For more information, check out IBM’s list of the five technologies, which has an article and video available with details for each one.

http://www.extremetech.com/extreme/143478-ibm-predicts-computers-will-have-the-five-human-senses-within-five-years

Ray Kurzweil joins Google

The best-known advocate of the Singularity is Ray Kurzweil, whom Bill Gates has called one of the best thinkers about the future of technology.

Ray Kurzweil confirmed today that he will be joining Google to work on new projects involving machine learning and language processing.

“I’m excited to share that I’ll be joining Google as Director of Engineering this Monday, December 17,” said Kurzweil.

“I’ve been interested in technology, and machine learning in particular, for a long time: when I was 14, I designed software that wrote original music, and later went on to invent the first print-to-speech reading machine for the blind, among other inventions. I’ve always worked to create practical systems that will make a difference in people’s lives, which is what excites me as an inventor.

“In 1999, I said that in about a decade we would see technologies such as self-driving cars and mobile phones that could answer your questions, and people criticized these predictions as unrealistic. Fast forward a decade — Google has demonstrated self-driving cars, and people are indeed asking questions of their Android phones. It’s easy to shrug our collective shoulders as if these technologies have always been around, but we’re really on a remarkable trajectory of quickening innovation, and Google is at the forefront of much of this development.

“I’m thrilled to be teaming up with Google to work on some of the hardest problems in computer science so we can turn the next decade’s ‘unrealistic’ visions into reality.”

http://www.kurzweilai.net/kurzweil-joins-google-to-work-on-new-projects-involving-machine-learning-and-language-processing

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Do we live in a computer simulation? UW researchers say idea can be tested

The conical (red) surface shows the relationship between energy and momentum in special relativity, a fundamental theory concerning space and time developed by Albert Einstein, and is the expected result if our universe is not a simulation. The flat (blue) surface illustrates the relationship between energy and momentum that would be expected if the universe is a simulation with an underlying cubic lattice.

A decade ago, a British philosopher put forth the notion that the universe we live in might in fact be a computer simulation run by our descendants. While that seems far-fetched, perhaps even incomprehensible, a team of physicists at the University of Washington has come up with a potential test to see if the idea holds water.

The concept that current humanity could possibly be living in a computer simulation comes from a 2003 paper published in Philosophical Quarterly by Nick Bostrom, a philosophy professor at the University of Oxford. In the paper, he argued that at least one of three possibilities is true:

  • The human species is likely to go extinct before reaching a “posthuman” stage.
  • Any posthuman civilization is very unlikely to run a significant number of simulations of its evolutionary history.
  • We are almost certainly living in a computer simulation.

He also held that “the belief that there is a significant chance that we will one day become posthumans who run ancestor simulations is false, unless we are currently living in a simulation.”

With current limitations and trends in computing, it will be decades before researchers will be able to run even primitive simulations of the universe. But the UW team has suggested tests that can be performed now, or in the near future, that are sensitive to constraints imposed on future simulations by limited resources.

Currently, supercomputers using a technique called lattice quantum chromodynamics and starting from the fundamental physical laws that govern the universe can simulate only a very small portion of the universe, on the scale of one 100-trillionth of a meter, a little larger than the nucleus of an atom, said Martin Savage, a UW physics professor.

Eventually, more powerful simulations will be able to model on the scale of a molecule, then a cell and even a human being. But it will take many generations of growth in computing power to be able to simulate a large enough chunk of the universe to understand the constraints on physical processes that would indicate we are living in a computer model.

However, Savage said, there are signatures of resource constraints in present-day simulations that are likely to exist as well in simulations in the distant future, including the imprint of an underlying lattice if one is used to model the space-time continuum.

The supercomputers performing lattice quantum chromodynamics calculations essentially divide space-time into a four-dimensional grid. That allows researchers to examine what is called the strong force, one of the four fundamental forces of nature and the one that binds subatomic particles called quarks and gluons together into neutrons and protons at the core of atoms.
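As a toy illustration of what dividing space-time into a four-dimensional grid means in practice, the sketch below simply allocates such a grid; a real lattice QCD code stores far richer quark and gluon degrees of freedom on every site and link, and the grid size and lattice spacing used here are purely illustrative.

```python
# Schematic only: lattice QCD discretizes space-time into a 4-D grid of
# points separated by a lattice spacing b. This toy sketch just sets up
# such a grid; it is not a real QCD code.
import numpy as np

b = 1.0e-16                 # hypothetical lattice spacing in meters (illustrative)
shape = (32, 16, 16, 16)    # (time, x, y, z) sites -- a tiny toy volume

# One complex "field" value per site stands in for the far richer
# quark/gluon degrees of freedom a real lattice simulation would store.
field = np.zeros(shape, dtype=np.complex128)

physical_extent = tuple(n * b for n in shape)
print("grid sites:", field.size, "spatial extent (m):", physical_extent[1:])
```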

“If you make the simulations big enough, something like our universe should emerge,” Savage said. Then it would be a matter of looking for a “signature” in our universe that has an analog in the current small-scale simulations.

Savage and colleagues Silas Beane of the University of New Hampshire, who collaborated while at the UW’s Institute for Nuclear Theory, and Zohreh Davoudi, a UW physics graduate student, suggest that the signature could show up as a limitation in the energy of cosmic rays.

In a paper they have posted on arXiv, an online archive for preprints of scientific papers in a number of fields, including physics, they say that the highest-energy cosmic rays would not travel along the edges of the lattice in the model but would travel diagonally, and they would not interact equally in all directions as they otherwise would be expected to do.
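In rough equation form, this is the distinction the figure caption above describes: in the continuum, energy and momentum trace out a smooth cone, while a cubic lattice of spacing b distorts that relation near the lattice cutoff and singles out the lattice axes. The lattice expression below is the schematic, textbook free-field form, not the paper’s exact result.

```latex
% Both relations in natural units (\hbar = c = 1).
% Continuum (special relativity) dispersion relation -- the conical surface:
\[
  E^{2} = |\vec{p}\,|^{2} + m^{2}
\]
% Free-field dispersion relation on a cubic spatial lattice of spacing b --
% the flattened surface, which bends away from the cone as momenta approach
% the lattice cutoff \pi/b (schematic textbook form, for illustration only):
\[
  E^{2} = \frac{4}{b^{2}} \sum_{i=1}^{3} \sin^{2}\!\left(\frac{p_{i}\, b}{2}\right) + m^{2}
\]
```

For small momenta the lattice formula reduces to the continuum cone, which is why the difference would only show up at the very highest energies, such as those of extreme cosmic rays.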

“This is the first testable signature of such an idea,” Savage said.

If such a concept turned out to be reality, it would raise other possibilities as well. For example, Davoudi suggests that if our universe is a simulation, then those running it could be running other simulations as well, essentially creating other universes parallel to our own.

“Then the question is, ‘Can you communicate with those other universes if they are running on the same platform?’” she said.

http://www.washington.edu/news/2012/12/10/do-we-live-in-a-computer-simulation-uw-researchers-say-idea-can-be-tested/

 

Breakthrough in Augmented Reality Contact Lens

The Centre of Microsystems Technology (CMST), imec’s associated laboratory at Ghent University (Belgium), has developed an innovative spherically curved LCD display, which can be embedded in contact lenses. The first step toward fully pixelated contact lens displays, this achievement has potential widespread applications in medical and cosmetic domains.

Unlike LED-based contact lens displays, which are limited to a few small pixels, imec’s innovative LCD-based technology permits the use of the entire display surface. By adapting the patterning process of the conductive layer, this technology enables applications with a broad range of pixel numbers and sizes, such as a one-pixel, fully covered contact lens acting as adaptable sunglasses, or a highly pixelated contact lens display.

The first prototype, presented December 5, contains a patterned dollar sign, a nod to the many cartoons that feature characters with dollar signs in their eyes. It can only display rudimentary patterns, similar to an electronic pocket calculator. In the future, the researchers envision fully autonomous electronic contact lenses embedded with this display. These next-generation solutions could be used for medical purposes, for example to control the light transmission toward the retina in the case of a damaged iris, or for cosmetic purposes such as an iris with a tunable color. The display could eventually also function as a head-up display, superimposing an image onto the user’s normal view. However, there are still hurdles to overcome before broader consumer and civilian implementation.

“Normally, flexible displays using liquid crystal cells are not designed to be formed into a new shape, especially not a spherical one. Thus, the main challenge was to create a very thin, spherically curved substrate with active layers that could withstand the extreme molding processes,” said Jelle De Smet, the main researcher on the project. “Moreover, since we had to use very thin polymer films, their influence on the smoothness of the display had to be studied in detail. By using new kinds of conductive polymers and integrating them into a smooth spherical cell, we were able to fabricate a new LCD-based contact lens display.”

Video: http://www.youtube.com/watch?v=-btRUzoKYEA

http://www.sciencedaily.com/releases/2012/12/121205090931.htm

Scientists at Cornell create Terminator-like organic metamaterial that flows like liquid and remembers its shape

A bit reminiscent of the Terminator T-1000, a new material created by Cornell researchers is so soft that it can flow like a liquid and then, strangely, return to its original shape.

Rather than liquid metal, it is a hydrogel, a mesh of organic molecules with many small empty spaces that can absorb water like a sponge. It qualifies as a “metamaterial” with properties not found in nature and may be the first organic metamaterial with mechanical meta-properties.

Hydrogels have already been considered for use in drug delivery — the spaces can be filled with drugs that release slowly as the gel biodegrades — and as frameworks for tissue rebuilding. The ability to form a gel into a desired shape further expands the possibilities. For example, a drug-infused gel could be formed to exactly fit the space inside a wound.

Dan Luo, professor of biological and environmental engineering, and colleagues describe their creation in the Dec. 2 issue of the journal Nature Nanotechnology.

The new hydrogel is made of synthetic DNA. In addition to being the stuff genes are made of, DNA can serve as a building block for self-assembling materials. Single strands of DNA will lock onto other single strands that have complementary coding, like tiny organic Legos. By synthesizing DNA with carefully arranged complementary sections, Luo’s research team previously created short strands that link into shapes such as crosses or Y’s, which in turn join at the ends into meshlike structures, producing the first successful all-DNA hydrogel. Trying a new approach, they mixed synthetic DNA with enzymes that cause DNA to self-replicate and to extend itself into long chains, to make a hydrogel without DNA linkages.

“During this process they entangle, and the entanglement produces a 3-D network,” Luo explained. But the result was not what they expected: The hydrogel they made flows like a liquid, but when placed in water returns to the shape of the container in which it was formed.

“This was not by design,” Luo said.

Examination under an electron microscope shows that the material is made up of a mass of tiny spherical “bird’s nests” of tangled DNA, about 1 micron (millionth of a meter) in diameter, further entangled to one another by longer DNA chains. It behaves something like a mass of rubber bands glued together: It has an inherent shape, but can be stretched and deformed.

Exactly how this works is “still being investigated,” the researchers said, but they theorize that the elastic forces holding the shape are so weak that a combination of surface tension and gravity overcomes them; the gel just sags into a loose blob. But when it is immersed in water, surface tension is nearly zero — there’s water inside and out — and buoyancy cancels gravity.

To demonstrate the effect, the researchers created hydrogels in molds shaped like the letters D, N and A. Poured out of the molds, the gels became amorphous liquids, but in water they morphed back into the letters. As a possible application, the team created a water-actuated switch. They made a short cylindrical gel infused with metal particles placed in an insulated tube between two electrical contacts. In liquid form the gel reaches both ends of the tube and forms a circuit. When water is added, the gel reverts to its shorter form that will not reach both ends. (The experiment is done with distilled water that does not conduct electricity.)

The DNA used in this work has a random sequence, and only occasional cross-linking was observed, Luo said. By designing the DNA to link in particular ways he hopes to be able to tune the properties of the new hydrogel.
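The designed linking Luo refers to rests on Watson-Crick base pairing: a single strand binds a partner whose sequence is its reverse complement. Here is a minimal illustration of that pairing rule, purely illustrative and not the team’s design software.

```python
# Minimal sketch of Watson-Crick complementarity: a single strand binds a
# partner whose sequence is the reverse complement of its own.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the sequence that would pair with `strand`."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def can_hybridize(strand_a: str, strand_b: str) -> bool:
    """True if the two strands are exact Watson-Crick partners."""
    return strand_b == reverse_complement(strand_a)

print(can_hybridize("ATGC", "GCAT"))   # True: GCAT is the reverse complement of ATGC
```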

The research has been partially supported by the U.S. Department of Agriculture and the Department of Defense.

http://www.news.cornell.edu/stories/Dec12/ShapeGel.html

Thanks to Dr. Rajadhyaksha for bringing this to the attention of the It’s Interesting community.

Future technology from Apple

All the way back in February of this year, Apple’s iPhone business alone surpassed the size of Microsoft’s entire business, with nearly $25 billion in quarterly revenue versus Microsoft’s roughly $20 billion.

Since February, Apple’s iPhone business has only grown, widening this gap.

Here’s the outdated chart from February:

[Chart: Apple iPhone revenue vs. Microsoft total revenue]

Remarkable, isn’t it?

Here’s what’s more remarkable yet: At this very moment, Apple is working on technology that, if successfully developed, will cannibalize and ultimately destroy that iPhone business.

We have two pieces of evidence.

The first is that Apple has established a pattern.

Unlike most companies, Apple has a remarkable ability to predict the kinds of gadgets that will undercut the gadgets it sells, and then build these new gadgets better than anyone else could.

The best example of this is the iPad, which is actively disrupting Apple’s own Mac business.

During Business Insider’s Ignition Conference last week, top Apple analyst Gene Munster of Piper Jaffray talked about Apple’s tendency to cannibalize its own businesses and predicted that it would continue to do so.

He speculated that Apple is working on consumer robotics, wearable computers, 3D printing, consumable computers, and automated technology.

He showed everyone this chart, which visualizes Apple’s pattern:

[Chart: Gene Munster’s visualization of Apple’s pattern of cannibalizing its own products]

Here’s the other reason it’s safe to assume Apple is quietly working on the destruction of its most massive business, the iPhone.

Just like Google and Microsoft, Apple is working on computerized glasses. 

Computerized glasses are, at the moment, the technology most likely to bring the smartphone era to an end.

They fit into an obvious pattern, where computers have been getting smaller and closer to our faces since their very beginning. 

First they were in big rooms, then they sat on desktops, then they sat on our laps, and now they’re in our palms. Next they’ll be on our faces. 

We have the rough schematics of Apple’s project.

They’ve been publicly available on the US Patent Office’s website since this summer, when they were noticed by several Apple-watching websites.

In the patent filing, Apple calls the gadget a “head-mounted display” or “HMD.”

The filing is authored by Tony Fadell, designer of the iPod, and John Tang. Fadell is no longer at Apple, but Tang is.

Some highlights from the description:

  • An HMD is “a display device that a person wears on the head in order to have video information directly displayed in front of the eyes.”
  • “The optics are typically embedded in a helmet, glasses, or a visor, which a user can wear.”
  • “HMDs can be used to view a see-through image imposed upon a real world view, thereby creating what is typically referred to as an augmented reality.”

Apple says HMDs can be used…

  • To “display relevant tactical information, such as maps or thermal imaging data.”
  • To “provide stereoscopic views of CAD schematics, simulations or remote sensing applications.”
  • For “gaming and entertainment applications.”

A gadget that features applications for maps, games, and a million other uses? Sounds familiar.

Here’s an illustration from the patent filing:

[Illustration from Apple’s patent filing]

Read more: http://www.businessinsider.com/apple-is-quietly-working-to-destroy-the-iphone-2012-12#ixzz2E7XVXabt

DARPA project suggests a mix of man and machine may be the most efficient way to spot danger: the Cognitive Technology Threat Warning System

Sentry duty is a tough assignment. Most of the time there’s nothing to see, and when a threat does pop up, it can be hard to spot. In some military studies, humans are shown to detect only 47 percent of visible dangers.

A project run by the Defense Advanced Research Projects Agency (DARPA) suggests that combining the abilities of human sentries with those of machine-vision systems could be a better way to identify danger. It also uses electroencephalography to identify spikes in brain activity that can correspond to subconscious recognition of an object.

An experimental system developed by DARPA sandwiches a human observer between layers of computer vision and has been shown to outperform either machines or humans used in isolation.

The so-called Cognitive Technology Threat Warning System consists of a wide-angle camera and radar, which collects imagery for humans to review on a screen, and a wearable electroencephalogram device that measures the reviewer’s brain activity. This allows the system to detect unconscious recognition of changes in a scene—called a P300 event.

In experiments, a participant was asked to review test footage shot at military test sites in the desert and rain forest. The system caught 91 percent of incidents (such as humans on foot or approaching vehicles) in the simulation. It also widened the field of view that could effectively be monitored. False alarms were raised only 0.2 percent of the time, down from 35 percent when a computer vision system was used on its own. When combined with radar, which detects things invisible to the naked eye, the accuracy of the system was close to 100 percent, DARPA says.
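One rough way to picture this kind of fusion: treat the machine-vision score, the reviewer’s P300 response, and the radar return as separate pieces of evidence and raise an alert only when their combination crosses a threshold. The weights and threshold below are invented placeholders, not the actual Cognitive Technology Threat Warning System algorithm.

```python
# Illustrative fusion of a computer-vision detection score with an EEG
# P300 flag and an optional radar return. Weights and thresholds are
# invented placeholders, not DARPA's actual parameters.
def threat_alert(cv_score: float, p300_detected: bool,
                 radar_hit: bool = False) -> bool:
    """Combine machine-vision confidence, the reviewer's P300 response,
    and an optional radar return into a single alert decision."""
    evidence = 0.5 * cv_score                    # machine-vision confidence, 0..1
    evidence += 0.4 if p300_detected else 0.0    # subconscious human recognition
    evidence += 0.3 if radar_hit else 0.0        # sensor invisible to the naked eye
    return evidence >= 0.6

# Example: a weak vision detection confirmed by the operator's brain response.
print(threat_alert(cv_score=0.45, p300_detected=True))   # True
```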

“The DARPA project is different from other ‘human-in-the-loop’ projects because it takes advantage of the human visual system without having the humans do any ‘work,’ ” says computer scientist Devi Parikh of the Toyota Technological Institute at Chicago. Parikh researches vision systems that combine human and machine expertise.

While electroencephalogram-measuring caps are commercially available for a few hundred dollars, Parikh warns that the technology is still in its infancy. Furthermore, she notes, the P300 signals may vary enough to require training or personalized processing, which could make it harder to scale up such a system for widespread use.

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://www.technologyreview.com/news/507826/sentry-system-combines-a-human-brain-with-computer-vision/

Maryland installs cameras to take pictures of cameras

Many people find speed cameras frustrating, and some in the region are taking their rage out on the cameras themselves.

But now there’s a new solution: cameras to watch the cameras.

One is already in place, and Prince George’s County Police Maj. Robert V. Liberati hopes to have up to a dozen more before the end of the year.

“It’s not worth going to jail over a $40 ticket or an arson or destruction of property charge,” says Liberati.

Liberati is the Commander of the Automated Enforcement Section, which covers speed and red-light cameras.

Since April, six people have damaged speed cameras.

On April 6, someone pulled a gun out and shot a camera on the 11400 block of Duley Station Road near U.S. 301 in Upper Marlboro, Md.

Two weeks later, a speed camera was flipped over at 500 Harry S. Truman Drive, near Prince George’s Community College. Police believe several people were involved because of the weight of the camera itself.

Then in May, someone walked up to a camera on Brightseat Road near FedEx Field, cut off one of the four legs, and left.

“I guess that makes a statement, but we were able to just attach another leg,” says Liberati.

But when someone burned down a speed camera on Race Track Road near Bowie State University on July 3, Liberati and his colleagues began to rethink their strategy.

“It costs us $30,000 to $100,000 to replace a camera. That’s a significant loss in the program. Plus it also takes a camera off the street that operates and slows people down. So there’s a loss of safety for the community,” says Liberati.

The Prince George’s County Police Department decided it needed to catch the vandals, or at least deter them.

“The roads are choked, there are lots of drivers on them. I think traffic itself is the cause of frustration (towards speed cameras). But we have a duty to make the roads safe, even if it takes a couple of extra minutes to get to your destination. Unfortunately, that’s the Washington area, the place we live in,” says Liberati.

Speed cameras themselves can’t be used for security because, under Maryland law, they can only take pictures of speeding vehicles, says Liberati.

“We’ve taken the additional step of marking our cameras to let people know that there is surveillance.”

Liberati says the cameras aren’t a case of Big Brother or a cash grab; police are simply trying to keep the public safe from reckless drivers.

http://www.wtop.com/41/3034979/New-cameras-to-watch-cameras-that-watch-you