Archive for the ‘military’ Category


Study paves way for personnel such as drone operators to have electrical pulses sent into their brains to improve effectiveness in high pressure situations.

US military scientists have used electrical brain stimulators to enhance mental skills of staff, in research that aims to boost the performance of air crews, drone operators and others in the armed forces’ most demanding roles.

The successful tests of the devices pave the way for servicemen and women to be wired up at critical times of duty, so that electrical pulses can be beamed into their brains to improve their effectiveness in high pressure situations.

The brain stimulation kits use five electrodes to send weak electric currents through the skull and into specific parts of the cortex. Previous studies have found evidence that by helping neurons to fire, these minor brain zaps can boost cognitive ability.

The technology is seen as a safer alternative to prescription drugs such as modafinil and Ritalin, both of which have been used off-label as performance-enhancing drugs in the armed forces.

But while electrical brain stimulation appears to have no harmful side effects, some experts say its long-term safety is unknown, and raise concerns about staff being forced to use the equipment if it is approved for military operations.

Others are worried about the broader implications for the general workforce of an advancing, unregulated technology.

In a new report, scientists at Wright-Patterson Air Force Base in Ohio describe how the performance of military personnel can slump soon after they start work if the demands of the job become too intense.

“Within the air force, various operations such as remotely piloted and manned aircraft operations require a human operator to monitor and respond to multiple events simultaneously over a long period of time,” they write. “With the monotonous nature of these tasks, the operator’s performance may decline shortly after their work shift commences.”

But in a series of experiments at the air force base, the researchers found that electrical brain stimulation can improve people’s multitasking skills and stave off the drop in performance that comes with information overload. Writing in the journal Frontiers in Human Neuroscience, they say that the technology, known as transcranial direct current stimulation (tDCS), has a “profound effect”.

For the study, the scientists had men and women at the base take a test developed by Nasa to assess multitasking skills. The test requires people to keep a crosshair inside a moving circle on a computer screen, while constantly monitoring and responding to three other tasks on the screen.

To investigate whether tDCS boosted people’s scores, half of the volunteers had a constant two milliamp current beamed into the brain for the 36-minute-long test. The other half formed a control group and had only 30 seconds of stimulation at the start of the test.

According to the report, the brain stimulation group started to perform better than the control group four minutes into the test. “The findings provide new evidence that tDCS has the ability to augment and enhance multitasking capability in a human operator,” the researchers write. Larger studies must now look at whether the improvement in performance is real and, if so, how long it lasts.
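The study’s design, an active group receiving a constant current for the full 36 minutes against a sham group receiving only 30 seconds, can be illustrated with a toy simulation. Everything below (decay rates, noise levels, scores) is invented for illustration and is not the study’s data:

```python
import random

def simulate_session(active_tdcs, minutes=36, seed=0):
    """Toy model of the multitasking test: performance starts at 1.0
    and declines over the session; active stimulation slows the decline.
    All numbers are illustrative, not taken from the study."""
    rng = random.Random(seed)
    decay = 0.005 if active_tdcs else 0.015  # assumed per-minute decline
    scores = []
    score = 1.0
    for _ in range(minutes):
        score = max(0.0, score - decay + rng.gauss(0, 0.002))
        scores.append(score)
    return scores

active = simulate_session(active_tdcs=True)
sham = simulate_session(active_tdcs=False)

# With the same noise sequence, the sham group ends the session lower.
print(f"active group final score: {active[-1]:.2f}")
print(f"sham group final score:   {sham[-1]:.2f}")
```

Comparing where the two curves diverge, rather than only the endpoints, is closer to the four-minute effect the report describes.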

The tests are not the first to claim beneficial effects from electrical brain stimulation. Last year, researchers at the same US facility found that tDCS seemed to work better than caffeine at keeping military target analysts vigilant after long hours at the desk. Brain stimulation has also been tested for its potential to help soldiers spot snipers more quickly in VR training programmes.

Neil Levy, deputy director of the Oxford Centre for Neuroethics, said that compared with prescription drugs, electrical brain stimulation could actually be a safer way to boost the performance of those in the armed forces. “I have more serious worries about the extent to which participants can give informed consent, and whether they can opt out once it is approved for use,” he said. “Even for those jobs where attention is absolutely critical, you want to be very careful about making it compulsory, or there being a strong social pressure to use it, before we are really sure about its long-term safety.”

But while the devices may be safe in the hands of experts, the technology is freely available, because the sale of brain stimulation kits is unregulated. They can be bought on the internet or assembled from simple components, which raises a greater concern, according to Levy. Young people whose brains are still developing may be tempted to experiment with the devices, and to try higher currents than those used in laboratories, he said. “If you use high currents you can damage the brain,” he said.

In 2014 another Oxford scientist, Roi Cohen Kadosh, warned that while brain stimulation could improve performance at some tasks, it made people worse at others. In light of that work, Cohen Kadosh urged people not to use brain stimulators at home.

If the technology is proved safe in the long run though, it could help those who need it most, said Levy. “It may have a levelling-up effect, because it is cheap and enhancers tend to benefit the people that perform less well,” he said.

https://www.theguardian.com/science/2016/nov/07/us-military-successfully-tests-electrical-brain-stimulation-to-enhance-staff-skills

Thanks to Kebmodee for bringing this to the It’s Interesting community.

The technology is reminiscent of the deflector shields popularized in the world of ‘Star Trek.’

There are several technologies from the world of “Star Trek” that perhaps seem forever relegated to science fiction: transporters, warp drives, universal translators, etc. But if Boeing has its way, you won’t find deflector shields on that list. The multinational corporation has been granted a patent for a real-life, force field-like defense system that is reminiscent of the Trekkie tech most famous for keeping the Enterprise safe from phaser blasts and photon torpedoes, reports CNN.

The patent, originally filed in 2012, calls the technology a “method and system for shockwave attenuation via electromagnetic arc.” Though not exactly the same thing as featured in “Star Trek,” the concept isn’t that far off from its fictional counterpart. Basically, the system is designed to create a shell of ionized air — a plasma field, essentially — between the shockwave of an oncoming blast and the object being protected.

According to the patent, it works “by heating a selected region of the first fluid medium rapidly to create a second, transient medium that intercepts the shockwave and attenuates its energy density before it reaches a protected asset.”

The protective arc of air can be superheated using a laser. In theory, such a plasma field should dissipate any shockwave that comes into contact with it, though its effectiveness has yet to be proven in practice. The device would also include sensors that can detect an oncoming blast before it makes impact, so that it wouldn’t have to be turned on at all times. It would only activate when needed, kind of like how a vehicle’s airbag is only triggered by an impact.
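The airbag-style arming logic described above can be sketched as a simple decision function. The function name and thresholds here are hypothetical; the patent describes the sense-then-fire behavior but gives no concrete numbers:

```python
def should_fire(distance_m, closing_speed_ms, arming_time_s=0.001):
    """Decide whether to trigger the laser-generated plasma arc.
    Fire only when the shockwave would arrive after the arc has time
    to form, but soon enough that waiting is not an option. The 0.5 s
    decision window and arming time are invented for illustration."""
    if closing_speed_ms <= 0:
        return False  # not approaching
    time_to_impact = distance_m / closing_speed_ms
    return arming_time_s < time_to_impact < 0.5

# A blast front 100 m away closing at 340 m/s arrives in ~0.29 s: fire.
print(should_fire(100, 340))   # → True
# A distant blast 5 km away would take ~15 s to arrive: hold.
print(should_fire(5000, 340))  # → False
```

Keeping the system idle until a sensor trips is what lets the shield avoid the enormous power cost of running continuously.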

Boeing’s force field would not protect against shrapnel or flying projectiles — it is only designed to guard against a shockwave — so it isn’t an all-encompassing shield. But if it works, it will still offer improved protection against dangers commonly met on modern battlefields.

“Explosive devices are being used increasingly in asymmetric warfare to cause damage and destruction to equipment and loss of life. The majority of the damage caused by explosive devices results from shrapnel and shock waves,” reads the patent.

So the world of “Star Trek” may not be so far off after all. Maybe next, we’ll have subspace communications and Vulcan mind melds. The line between science and science fiction is becoming increasingly blurred indeed.

Read more: http://www.mnn.com/green-tech/research-innovations/stories/boeing-granted-patent-for-worlds-first-real-life-force-field

Thanks to Kebmodee and Da Brayn for bringing this to the attention of the It’s Interesting community.

A laser weapon made by Lockheed Martin can stop a small truck dead in its tracks from more than a mile (1.6 kilometers) away, the company announced this week.

The laser system, called ATHENA (short for Advanced Test High Energy Asset), is designed to protect military forces and key infrastructure, Lockheed Martin representatives said. During a recent field test, the laser managed to burn through and disable a small truck’s engine.

The truck was not driving normally; it was on a platform with the engine and drivetrain running, Lockheed Martin representatives said. The milestone is the highest power ever documented by a laser weapon of its type, according to the company. Lockheed is expected to conduct additional tests of ATHENA.

“Fiber-optic lasers are revolutionizing directed energy systems,” Keoki Jackson, Lockheed Martin’s chief technology officer, said in a statement. “This test represents the next step to providing lightweight and rugged laser-weapon systems for military aircraft, helicopters, ships and trucks.”

The ATHENA system could be a boon for the military because the laser can stop ground-based adversaries from interfering with operations long before they reach the front lines, company representatives said.

The laser weapon is based on a similar system called Area Defense Anti-Munitions (also developed by Lockheed Martin), which focuses on airborne threats. The 30-kilowatt Accelerated Laser Demonstration Initiative — the laser in ATHENA itself — was also made by Lockheed.

The recent test was the first time that such a laser was tested in the field, the company said. The Accelerated Laser Demonstration Initiative is a multifiber laser created through a technique called spectral beam combining. Essentially, the system takes multiple lasers and mashes them into one. Lockheed representatives said this beam “provides greater efficiency and lethality than multiple individual 10-kilowatt lasers used in other systems.”
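The payoff of spectral beam combining, several lower-power fiber lasers delivering their summed power to a single spot, comes down to simple arithmetic. A minimal sketch, with an assumed spot size for illustration:

```python
def combined_irradiance(powers_kw, spot_area_cm2):
    """Irradiance (kW/cm^2) when several fiber-laser beams are
    spectrally combined. Because the beams have different wavelengths,
    they can be overlapped into one aperture and their powers add,
    rather than being spread across separate spots."""
    return sum(powers_kw) / spot_area_cm2

# Three assumed 10 kW fiber lasers combined into an assumed 1 cm^2 spot:
single = combined_irradiance([10.0], 1.0)        # one laser alone
combined = combined_irradiance([10.0] * 3, 1.0)  # the 30 kW combined beam
print(combined / single)  # → 3.0: triple the energy density on target
```

Concentrating all the power in one spot, rather than three adjacent ones, is what the company means by greater efficiency and lethality than separate 10-kilowatt lasers.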

Last year, Lockheed also highlighted its laser defense capabilities in a demonstration test involving two boats located about 1 mile apart. The target vessel, described as “military-grade,” was stopped less than 30 seconds after the laser burned through its rubber hull.

http://www.livescience.com/50064-laser-weapon-stops-truck.html

Thanks to Da Brayn for bringing this to the attention of the It’s Interesting community.

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research told Defense One. “For example, Google’s self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”

The United States military prohibits lethal fully autonomous robots. And semi-autonomous robots can’t “select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator,” even in the event that contact with the operator is cut off, according to a 2012 Department of Defense policy directive.

“Even if such systems aren’t armed, they may still be forced to make moral decisions,” Bello said. For instance, in a disaster scenario, a robot may be forced to make a choice about whom to evacuate or treat first, a situation where a bot might use some sense of ethical or moral reasoning. “While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can’t take the idea of in-theater robots completely off the table,” Bello said.

Some members of the artificial intelligence, or AI, research and machine ethics communities were quick to applaud the grant. “With drones, missile defenses, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions,” AI researcher Steven Omohundro told Defense One. “Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define ‘the rules of war’ and this technology is likely to increase the stakes for that.”

“We’re talking about putting robots in more and more contexts in which we can’t predict what they’re going to do, what kind of situations they’ll encounter. So they need to do some kind of ethical reasoning in order to sort through various options,” said Wendell Wallach, the chair of the Yale Technology and Ethics Study Group and author of the book Moral Machines: Teaching Robots Right From Wrong.

The sophistication of cutting-edge drones like the batwing-shaped Taranis from Britain’s BAE Systems and Northrop Grumman’s X-47B reveals more self-direction creeping into ever more heavily armed systems. The X-47B, Wallach said, is “enormous and it does an awful lot of things autonomously.”

But how do you code something as abstract as moral logic into a bunch of transistors? The vast openness of the problem is why the framework approach is important, says Wallach. Some types of morality are more basic, thus more code-able, than others.

“There’s operational morality, functional morality, and full moral agency,” Wallach said. “Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses… Functional morality is where the robot starts to move into situations where the operator can’t always predict what [the robot] will encounter and [the robot] will need to bring some form of ethical reasoning to bear.”
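Wallach’s “operational morality” tier, in which the operator pre-programs a response for every situation they can foresee, can be sketched as a lookup table. The scenarios and responses below are invented for illustration; the fallback marks the boundary where functional morality would have to take over:

```python
# Pre-programmed situation -> response pairs: the operator has anticipated
# each of these in advance. All names here are hypothetical examples.
PREPROGRAMMED_RESPONSES = {
    "civilian_detected": "hold_position",
    "comms_lost": "return_to_base",
    "obstacle_ahead": "reroute",
}

def respond(situation):
    """Return the pre-programmed response. For any situation the operator
    never anticipated, defer to a human: the point at which a robot would
    need 'functional morality' to reason on its own."""
    return PREPROGRAMMED_RESPONSES.get(situation, "defer_to_human_operator")

print(respond("comms_lost"))     # → return_to_base
print(respond("novel_dilemma"))  # → defer_to_human_operator
```

The research the Navy is funding concerns everything that falls through this table: situations no designer enumerated in advance.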

It’s a thick knot of questions to work through. But, Wallach says, it is one with high potential to transform the battlefield.

“One of the arguments for [moral] robots is that they may be even better than humans in picking a moral course of action because they may consider more courses of action,” he said.

Ronald Arkin, an AI expert from Georgia Tech and author of the book Governing Lethal Behavior in Autonomous Robots, is a proponent of giving machines a moral compass. “It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of,” Arkin wrote in a 2007 research paper (PDF). Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent.

AI robotics expert Noel Sharkey is a detractor. He has been highly critical of armed drones in general, and has argued that autonomous weapons systems cannot be trusted to conform to international law.

“I do not think that they will end up with a moral or ethical robot,” Sharkey told Defense One. “For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won’t really care. It will follow a human designer’s idea of ethics.”

“The simple example that has been given to the press about scheduling help for wounded soldiers is a good one. My concern would be if [the military] were to extend a system like this for lethal autonomous weapons – weapons where the decision to kill is delegated to a machine; that would be deeply troubling,” he said.

This week, Sharkey and Arkin are debating whether morality can be built into AI systems before the U.N., where they may find an audience very sympathetic to the idea that a moratorium should be placed on the further development of autonomous armed robots.

Christof Heyns, U.N. special rapporteur on extrajudicial, summary or arbitrary executions for the Office of the High Commissioner for Human Rights, is calling for a moratorium. “There is reason to believe that states will, inter alia, seek to use lethal autonomous robotics for targeted killing,” Heyns said in an April 2013 report to the U.N.

The Defense Department’s policy directive on lethal autonomy offers little reassurance here since the department can change it without congressional approval, at the discretion of the chairman of the Joint Chiefs of Staff and two undersecretaries of Defense. University of Denver scholar Heather Roff, in an op-ed for the Huffington Post, calls that a “disconcerting” lack of oversight and notes that “fielding of autonomous weapons then does not even rise to the level of the Secretary of Defense, let alone the president.”

If researchers can prove that robots can do moral math, even if in some limited form, they may be able to defuse rising public anger and mistrust over armed unmanned vehicles. But it’s no small task.

“This is a significantly difficult problem and it’s not clear we have an answer to it,” said Wallach. “Robots, both domestic and military, are going to find themselves in situations where there are a number of courses of action, and they are going to need to bring some kind of ethical routine to bear on determining the most ethical course of action. If we’re moving down this road of increasing autonomy in robotics, and that is as true for Google cars as it is for military robots, we should begin now to do the research into how far we can get in ensuring that robot systems are safe and can make appropriate decisions in the contexts they operate in.”

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://www.defenseone.com/technology/2014/05/now-military-going-build-robots-have-morals/84325/?oref=d-topstory

DARPA, the Defense Advanced Research Projects Agency, has developed new paddles that allow users to climb vertical walls like Spider-Man. For the first time in history, a fully grown person climbed a glass wall more than two stories in the air.

The Z-Man program aimed to design a new tool for soldiers to use when climbing walls. Traditionally, fighters in wartime have had to rely on ladders and ropes to overcome vertical surfaces. These are both noisy and bulky, making it difficult for soldiers to climb quietly when needed.

“The gecko is one of the champion climbers in the Animal Kingdom, so it was natural for DARPA to look to it for inspiration in overcoming some of the maneuver challenges that U.S. forces face in urban environments,” said Matt Goodman, DARPA program manager for the Z-Man program.

This challenge was one many species had already faced in the wild. Geckos, able to climb vertical surfaces, were an inspiration to the inventors.

“[N]ature had long since evolved the means to efficiently achieve it. The challenge to our performer team was to understand the biology and physics in play when geckos climb and then reverse-engineer those dynamics into an artificial system for use by humans,” Goodman told the press.

The lizard uses microscopic tendrils, called setae, that end in flat spatulae. This dual structure gives the creature an extremely large surface area in contact with whatever it touches. That allows van der Waals forces, a weak electrostatic attraction between molecules in close contact, to hold the lizard in place. The same mechanism is used in the paddles.

Draper Laboratory, headquartered in Cambridge, Massachusetts, assisted the military technology developers in creating the devices. The company developed the unique microstructure material needed to make the design work.

In one trial, the demonstration climb involved a climber weighing 218 pounds carrying an additional 50-pound load. He ascended and descended the vertical glass surface using nothing but a pair of the paddles.
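Because van der Waals adhesion scales with contact area, a back-of-envelope estimate shows roughly how much paddle area such a climb implies. The adhesive strength and safety factor below are assumptions for illustration; DARPA has not published the paddles’ actual ratings:

```python
def paddle_area_needed(total_mass_kg, adhesive_strength_kpa, safety_factor=4.0):
    """Rough contact area (cm^2) needed to support a climber, given an
    adhesive that holds a certain load per unit area. The strength and
    safety factor are assumed figures, not DARPA specifications."""
    g = 9.81                                    # m/s^2
    load_n = total_mass_kg * g * safety_factor  # design load in newtons
    strength_pa = adhesive_strength_kpa * 1000  # kPa -> N/m^2
    area_m2 = load_n / strength_pa
    return area_m2 * 1e4                        # m^2 -> cm^2

# Climber plus gear: 218 lb + 50 lb = 268 lb, about 121.6 kg.
mass_kg = 268 * 0.4536
area = paddle_area_needed(mass_kg, adhesive_strength_kpa=100)
print(f"~{area:.0f} cm^2 of contact area")
```

Under these assumed figures the answer lands in the range of a few hundred square centimeters, roughly what a pair of hand paddles could provide.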

Warfare constantly advances in technology and strategies, but ropes and ladders – still needed to scale walls – have not significantly changed in thousands of years.

“‘Geckskin’ is one output of the Z-Man program. It is a synthetically-fabricated reversible adhesive inspired by the gecko’s ability to climb surfaces of various materials and roughness, including smooth surfaces like glass,” DARPA officials wrote on the Z-Man website.

Advances in this bio-inspired technology could have benefits beyond the battlefield. Materials similar to the structure in the pad could be used as temporary adhesives for bandages and for industrial and commercial products.

http://www.techtimes.com/articles/8287/20140610/gecko-inspired-darpa-paddles-become-spider-man.htm

Scientists with the United States Navy say they have successfully developed a way to convert seawater into jet fuel, calling it a potentially revolutionary advancement.

Researchers at the Naval Research Laboratory (NRL) developed technology to extract carbon dioxide from seawater while simultaneously producing hydrogen, and then converted the gases into hydrocarbon liquid fuel. The system could potentially shave hours off the at-sea refueling process and eliminate time spent away from missions.

Currently, most of the Navy’s vessels rely entirely on oil-based fuel, with the exception of some aircraft carriers and submarines that use nuclear propulsion, reports the International Business Times. The ability to render fuel from seawater may change that.

“For us in the military, in the Navy, we have some pretty unusual and different kinds of challenges,” Vice Admiral Philip Cullom told Agence France-Presse. “We don’t necessarily go to a gas station to get our fuel. Our gas station comes to us in terms of an oiler, a replenishment ship. Developing a game-changing technology like this, seawater to fuel, really is something that reinvents a lot of the way we can do business when you think about logistics, readiness.”

The carbon dioxide and hydrogen gases produced from the seawater extraction process are converted to liquids using metal catalytic converters in a reactor system. That liquid product contains hydrocarbon molecules with carbon levels suitable for replacing petroleum jet fuel, the NRL noted in a press release.

“Basically, we’ve treated energy like air, something that’s always there and that we don’t worry about too much. But the reality is that we do have to worry about it,” Cullom told AFP.

The NRL projects the new fueling system could be commercially viable in less than 10 years and could produce jet fuel that costs $3 to $6 per gallon.

Forbes columnist Tim Worstall says the system could be great for the Navy, but he doubts it will be an economically feasible or energy-efficient alternative for those of us on land. “We need more energy to go into the process than we get out of it,” he wrote of the Navy’s method for converting seawater to fuel, adding later, “[A]s a general rule it’s not really all that useful. We want to produce energy, not just transform it with efficiency losses along the way.”
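Worstall’s point can be checked with round numbers. Assuming the recovered CO2 and hydrogen are recombined in a Fischer-Tropsch-style reaction (n CO2 + 3n H2 → (CH2)n + 2n H2O), the energy cost is dominated by producing the hydrogen. The figures below are textbook approximations and an assumed efficiency, not NRL’s numbers:

```python
# Back-of-envelope energy balance for seawater-derived fuel.
JET_FUEL_ENERGY_MJ_PER_KG = 43  # approximate heating value of jet fuel
H2_ENERGY_MJ_PER_KG = 120       # approximate heating value of hydrogen
ELECTROLYSIS_EFFICIENCY = 0.7   # assumed; real systems vary

# Per (CH2)n fuel unit of 14 g/mol, the reaction consumes 3 H2 (6 g/mol),
# so each kg of fuel needs roughly 0.43 kg of hydrogen.
h2_per_kg_fuel = 6 / 14

energy_in = (h2_per_kg_fuel * H2_ENERGY_MJ_PER_KG) / ELECTROLYSIS_EFFICIENCY
energy_out = JET_FUEL_ENERGY_MJ_PER_KG

print(f"energy in:  {energy_in:.0f} MJ per kg of fuel")   # more goes in...
print(f"energy out: {energy_out:.0f} MJ per kg of fuel")  # ...than comes out
```

The gap is acceptable at sea, where a nuclear carrier has surplus power and the alternative is a vulnerable oiler, but it is why the process is unattractive as a land-based energy source.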

http://www.huffingtonpost.com/2014/04/09/seawater-to-fuel-navy-vessels-_n_5113822.html

Thanks to Ray Gaudetter for bringing this to the attention of the It’s Interesting community.

by Jon Hamilton

Veterans who smoke marijuana to cope with post-traumatic stress disorder may be onto something. There’s growing evidence that pot can affect brain circuits involved in PTSD.

Experiments in animals show that tetrahydrocannabinol, the chemical that gives marijuana its feel-good qualities, acts on a system in the brain that is “critical for fear and anxiety modulation,” says Andrew Holmes, a researcher at the National Institute on Alcohol Abuse and Alcoholism. But he and other brain scientists caution that marijuana has serious drawbacks as a potential treatment for PTSD.

The use of marijuana for PTSD has gained national attention in the past few years as thousands of traumatized veterans who fought in Iraq and Afghanistan have asked the federal government to give them access to the drug. Also, Maine and a handful of other states have passed laws giving people with PTSD access to medical marijuana.

But there’s never been a rigorous scientific study to find out whether marijuana actually helps people with PTSD. So lawmakers and veterans groups have relied on anecdotes from people with the disorder and new research on how both pot and PTSD work in the brain.

An Overactive Fear System

When a typical person encounters something scary, the brain’s fear system goes into overdrive, says Dr. Kerry Ressler of Emory University. The heart pounds, muscles tighten. Then, once the danger is past, everything goes back to normal, he says.

But Ressler says that’s not what happens in the brain of someone with PTSD. “One way of thinking about PTSD is an overactivation of the fear system that can’t be inhibited, can’t be normally modulated,” he says.

For decades, researchers have suspected that marijuana might help people with PTSD by quieting an overactive fear system. But they didn’t understand how this might work until 2002, when scientists in Germany published a mouse study showing that the brain uses chemicals called cannabinoids to modulate the fear system, Ressler says.

There are two common sources of cannabinoids. One is the brain itself, which uses the chemicals to regulate a variety of brain cells. The other common source is Cannabis sativa, the marijuana plant.

So in recent years, researchers have done lots of experiments that involved treating traumatized mice with the active ingredient in pot, tetrahydrocannabinol (THC), Ressler says. And in general, he says, the mice that get THC look “less anxious, more calm, you know, many of the things that you might imagine.”

Problems with Pot

Unfortunately, THC’s effect on fear doesn’t seem to last, Ressler says, because prolonged exposure seems to make brain cells less sensitive to the chemical.

Another downside to using marijuana for PTSD is side effects, says Andrew Holmes at the National Institute on Alcohol Abuse and Alcoholism. “You may indeed get a reduction in anxiety,” Holmes says. “But you’re also going to get all of these unwanted effects,” including short-term memory loss, increased appetite and impaired motor skills.

So for several years now, Holmes and other scientists have been testing drugs that appear to work like marijuana, but with fewer drawbacks. Some of the most promising drugs amplify the effect of the brain’s own cannabinoids, which are called endocannabinoids, he says. “What’s encouraging about the effects of these endocannabinoid-acting drugs is that they may allow for long-term reductions in anxiety, in other words weeks if not months.”

The drugs work well in mice, Holmes says. But tests in people are just beginning and will take years to complete. In the meantime, researchers are learning more about how marijuana and THC affect the fear system in people.

At least one team has had success giving a single dose of THC to people during something called extinction therapy. The therapy is designed to teach the brain to stop reacting to something that previously triggered a fearful response.

The team’s study found that people who got THC during the therapy had “long-lasting reductions in anxiety, very similar to what we were seeing in our animal models,” Holmes says. So THC may be most useful when used for a short time in combination with other therapy, he says.

As studies continue to suggest that marijuana can help people with PTSD, it may be unrealistic to expect people with the disorder to wait for something better than marijuana and THC, Ressler says. “I’m a pragmatist,” he says. “I think if there are medications including drugs like marijuana that can be used in the right way, there’s an opportunity there, potentially.”

http://www.npr.org/blogs/health/2013/12/23/256610483/could-pot-help-veterans-with-ptsd-brain-scientists-say-maybe