Archive for the ‘Military Industrial Complex’ Category

The technology is reminiscent of the deflector shields popularized in the world of ‘Star Trek.’

There are several technologies from the world of “Star Trek” that seem forever relegated to science fiction: transporters, warp drives, universal translators and so on. But if Boeing has its way, deflector shields won’t be on that list. The multinational corporation has been granted a patent for a real-life, force-field-like defense system reminiscent of the Trek tech most famous for keeping the Enterprise safe from phaser blasts and photon torpedoes, reports CNN.

The patent, originally filed in 2012, calls the technology a “method and system for shockwave attenuation via electromagnetic arc.” Though not exactly the same thing as featured in “Star Trek,” the concept isn’t that far off from its fictional counterpart. Basically, the system is designed to create a shell of ionized air — a plasma field, essentially — between the shockwave of an oncoming blast and the object being protected.

According to the patent, it works “by heating a selected region of the first fluid medium rapidly to create a second, transient medium that intercepts the shockwave and attenuates its energy density before it reaches a protected asset.”

The protective arc of air can be superheated using a laser. In theory, such a plasma field should dissipate any shockwave that comes into contact with it, though its effectiveness has yet to be proven in practice. The device would also include sensors that can detect an oncoming blast before it makes impact, so that it wouldn’t have to be turned on at all times. It would only activate when needed, kind of like how a vehicle’s airbag is only triggered by an impact.
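The patent describes an event-driven system rather than an always-on shield: sensors watch for a blast signature and fire the laser only when one is detected. A minimal sketch of that trigger logic, with an entirely hypothetical threshold and sensor readings (none of these numbers come from the patent):

```python
# Sketch of the airbag-style trigger described in the patent: the laser
# only fires when sensors detect an imminent blast, rather than running
# continuously. The threshold and readings here are hypothetical.

ARC_THRESHOLD = 5.0  # hypothetical sensor units indicating an oncoming blast

def should_fire_arc(sensor_reading: float) -> bool:
    """Return True only when the sensor detects an imminent shockwave."""
    return sensor_reading >= ARC_THRESHOLD

def control_loop(readings):
    """Yield 'fire' or 'idle' for each sensor sample."""
    for r in readings:
        yield "fire" if should_fire_arc(r) else "idle"

# Quiet background noise, then a detected blast signature, then quiet again:
states = list(control_loop([0.2, 0.3, 7.8, 0.1]))
print(states)  # -> ['idle', 'idle', 'fire', 'idle']
```

The design choice mirrors the airbag analogy in the article: keeping the system dormant until triggered avoids the energy cost of sustaining a plasma field continuously.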

Boeing’s force field would not protect against shrapnel or flying projectiles — it is only designed to guard against a shockwave — so it isn’t an all-encompassing shield. But if it works, it will still offer improved protection against dangers commonly met on modern battlefields.

“Explosive devices are being used increasingly in asymmetric warfare to cause damage and destruction to equipment and loss of life. The majority of the damage caused by explosive devices results from shrapnel and shock waves,” reads the patent.

So the world of “Star Trek” may not be so far off after all. Maybe next, we’ll have subspace communications and Vulcan mind melds. The line between science and science fiction is becoming increasingly blurred indeed.

Read more: http://www.mnn.com/green-tech/research-innovations/stories/boeing-granted-patent-for-worlds-first-real-life-force-field#ixzz3VoQfqOyA

Thanks to Kebmodee and Da Brayn for bringing this to the attention of the It’s Interesting community.


A laser weapon made by Lockheed Martin can stop a small truck dead in its tracks from more than a mile (1.6 kilometers) away, the company announced this week.

The laser system, called ATHENA (short for Advanced Test High Energy Asset), is designed to protect military forces and key infrastructure, Lockheed Martin representatives said. During a recent field test, the laser managed to burn through and disable a small truck’s engine.

The truck was not actually being driven; it was mounted on a platform with its engine and drivetrain running, Lockheed Martin representatives said. The test marked the highest power ever documented by a laser weapon of its type, according to the company, and Lockheed is expected to conduct additional tests of ATHENA.

“Fiber-optic lasers are revolutionizing directed energy systems,” Keoki Jackson, Lockheed Martin’s chief technology officer, said in a statement. “This test represents the next step to providing lightweight and rugged laser-weapon systems for military aircraft, helicopters, ships and trucks.”

The ATHENA system could be a boon for the military because the laser can stop ground-based adversaries from interfering with operations long before they reach the front lines, company representatives said.

The laser weapon is based on a similar system called Area Defense Anti-Munitions (also developed by Lockheed Martin), which focuses on airborne threats. The 30-kilowatt Accelerated Laser Demonstration Initiative — the laser in ATHENA itself — was also made by Lockheed.

The recent test was the first time that such a laser was tested in the field, the company said. The Accelerated Laser Demonstration Initiative is a multifiber laser created through a technique called spectral beam combining. Essentially, the system takes multiple lasers and mashes them into one. Lockheed representatives said this beam “provides greater efficiency and lethality than multiple individual 10-kilowatt lasers used in other systems.”
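Spectral beam combining overlaps several fiber-laser channels, each at a slightly different wavelength, into a single output beam whose power is roughly the sum of the channels. A back-of-the-envelope sketch of that arithmetic (the channel powers and combining efficiency below are illustrative assumptions, not Lockheed figures):

```python
# Illustrative arithmetic for spectral beam combining: several fiber-laser
# channels at distinct wavelengths are overlapped into one beam, so output
# power scales with the number of channels. All numbers are assumptions.

def combined_power_kw(channel_powers_kw, combining_efficiency=0.9):
    """Approximate output power (kW) of a spectrally combined beam."""
    return sum(channel_powers_kw) * combining_efficiency

# e.g. three ~11 kW channels combined at an assumed 90% efficiency land
# in the 30 kW class quoted for the ATHENA laser:
print(round(combined_power_kw([11.1, 11.1, 11.1]), 1))  # -> 30.0
```

The appeal over simply firing several 10-kilowatt lasers side by side is that the combined channels share one aperture and one focal spot, which is what the quoted efficiency-and-lethality claim is about.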

Last year, Lockheed also highlighted its laser defense capabilities in a demonstration involving two boats located about 1 mile apart. The target vessel, described as “military-grade,” was stopped less than 30 seconds after the laser burned through its rubber hull.

http://www.livescience.com/50064-laser-weapon-stops-truck.html

Thanks to Da Brayn for bringing this to the attention of the It’s Interesting community.

defense-large

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One. “For example, Google’s self-driving cars are legal and in-use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”

The United States military prohibits lethal fully autonomous robots. And semi-autonomous robots can’t “select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator,” even in the event that contact with the operator is cut off, according to a 2012 Department of Defense policy directive.
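The directive’s constraint can be paraphrased as a guard condition: a semi-autonomous system may engage only targets a human operator has already authorized, and losing contact with the operator must never relax that rule. A toy sketch of that logic (the function and field names are mine for illustration, not from the directive):

```python
# Toy paraphrase of the 2012 DoD directive's constraint on semi-autonomous
# systems: engage only pre-authorized targets, and loss of the operator
# link must not widen authority. All identifiers here are illustrative.

def may_engage(target_id: str, authorized: set, link_up: bool) -> bool:
    """Engagement is allowed only for targets a human pre-authorized.

    The operator-link status is deliberately ignored: a dropped link
    (link_up=False) grants no additional authority.
    """
    return target_id in authorized

approved = {"T-01", "T-02"}
print(may_engage("T-01", approved, link_up=True))   # -> True
print(may_engage("T-99", approved, link_up=False))  # -> False
```

The point of the sketch is that the rule is a hard precondition, not a judgment call, which is precisely why the researchers quoted below see “functional morality” as a separate, harder problem.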

“Even if such systems aren’t armed, they may still be forced to make moral decisions,” Bello said. For instance, in a disaster scenario, a robot may be forced to make a choice about whom to evacuate or treat first, a situation where a bot might use some sense of ethical or moral reasoning. “While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can’t take the idea of in-theater robots completely off the table,” Bello said.

Some members of the artificial intelligence, or AI, research and machine ethics communities were quick to applaud the grant. “With drones, missile defenses, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions,” AI researcher Steven Omohundro told Defense One. “Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define ‘the rules of war’ and this technology is likely to increase the stakes for that.”

“We’re talking about putting robots in more and more contexts in which we can’t predict what they’re going to do, what kind of situations they’ll encounter. So they need to do some kind of ethical reasoning in order to sort through various options,” said Wendell Wallach, the chair of the Yale Technology and Ethics Study Group and author of the book Moral Machines: Teaching Robots Right From Wrong.

The sophistication of cutting-edge drones like BAE Systems’ batwing-shaped Taranis and Northrop Grumman’s X-47B reveals more self-direction creeping into ever more heavily armed systems. The X-47B, Wallach said, is “enormous and it does an awful lot of things autonomously.”

But how do you code something as abstract as moral logic into a bunch of transistors? The vast openness of the problem is why the framework approach is important, Wallach said. Some types of morality are more basic, and thus more codable, than others.

“There’s operational morality, functional morality, and full moral agency,” Wallach said. “Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses… Functional morality is where the robot starts to move into situations where the operator can’t always predict what [the robot] will encounter and [the robot] will need to bring some form of ethical reasoning to bear.”

It’s a thick knot of questions to work through but, Wallach said, one with a high potential to transform the battlefield.

“One of the arguments for [moral] robots is that they may be even better than humans in picking a moral course of action because they may consider more courses of action,” he said.

Ronald Arkin, an AI expert from Georgia Tech and author of the book Governing Lethal Behavior in Autonomous Robots, is a proponent of giving machines a moral compass. “It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of,” Arkin wrote in a 2007 research paper (PDF). Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent.

AI robotics expert Noel Sharkey is a detractor. He has been highly critical of armed drones in general and has argued that autonomous weapons systems cannot be trusted to conform to international law.

“I do not think that they will end up with a moral or ethical robot,” Sharkey told Defense One. “For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won’t really care. It will follow a human designer’s idea of ethics.”

“The simple example that has been given to the press about scheduling help for wounded soldiers is a good one. My concern would be if [the military] were to extend a system like this for lethal autonomous weapons – weapons where the decision to kill is delegated to a machine; that would be deeply troubling,” he said.

This week, Sharkey and Arkin are debating before the U.N. whether morality can be built into AI systems, and they may find an audience very sympathetic to the idea that a moratorium should be placed on the further development of autonomous armed robots.

Christof Heyns, U.N. special rapporteur on extrajudicial, summary or arbitrary executions for the Office of the High Commissioner for Human Rights, is calling for a moratorium. “There is reason to believe that states will, inter alia, seek to use lethal autonomous robotics for targeted killing,” Heyns said in an April 2013 report to the U.N.

The Defense Department’s policy directive on lethal autonomy offers little reassurance here since the department can change it without congressional approval, at the discretion of the chairman of the Joint Chiefs of Staff and two undersecretaries of Defense. University of Denver scholar Heather Roff, in an op-ed for the Huffington Post, calls that a “disconcerting” lack of oversight and notes that “fielding of autonomous weapons then does not even raise to the level of the Secretary of Defense, let alone the president.”

If researchers can prove that robots can do moral math, even in some limited form, they may be able to defuse rising public anger and mistrust over armed unmanned vehicles. But it’s no small task.

“This is a significantly difficult problem and it’s not clear we have an answer to it,” said Wallach. “Robots both domestic and militarily are going to find themselves in situations where there are a number of courses of actions and they are going to need to bring some kinds of ethical routines to bear on determining the most ethical course of action. If we’re moving down this road of increasing autonomy in robotics, and that’s the same as Google cars as it is for military robots, we should begin now to do the research to how far can we get in ensuring the robot systems are safe and can make appropriate decisions in the context they operate.”

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://www.defenseone.com/technology/2014/05/now-military-going-build-robots-have-morals/84325/?oref=d-topstory

DARPA, the Defense Advanced Research Projects Agency, has developed new paddles that allow users to climb vertical walls like Spider-Man. For the first time in history, a fully grown person has climbed a glass wall more than two stories high.

The Z-Man program aimed to design a new tool for soldiers to use when climbing walls. Traditionally, fighters in wartime have had to rely on ladders and ropes to overcome vertical surfaces. Both are noisy and bulky, making it difficult for warriors to climb quietly when needed.

This challenge was one many species had already solved in the wild, and geckos, with their ability to climb vertical surfaces, were the inventors’ inspiration.

“The gecko is one of the champion climbers in the Animal Kingdom, so it was natural for DARPA to look to it for inspiration in overcoming some of the maneuver challenges that U.S. forces face in urban environments,” said Matt Goodman, DARPA program manager for the Z-Man program.

“[N]ature had long since evolved the means to efficiently achieve it. The challenge to our performer team was to understand the biology and physics in play when geckos climb and then reverse-engineer those dynamics into an artificial system for use by humans,” Goodman told the press.

The lizard uses microscopic tendrils, called setae, that end in flat spatulae. This dual structure gives the creature an extremely large surface area in contact with whatever it touches, allowing van der Waals forces (weak intermolecular attractions that act over very short ranges) to hold the lizard in place. The paddles exploit the same technique.

Draper Laboratory, headquartered in Cambridge, Massachusetts, assisted the military technology developers in creating the devices. The firm developed the unique microstructure material needed to make the design work.

The demonstration involved a 218-pound climber who, in one trial, also carried a 50-pound load. He ascended and descended the vertical glass surface using nothing but a pair of the paddles.
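The engineering bottleneck here is adhesive stress times contact area. Taking the demonstration’s roughly 268-pound total load (the 218-pound climber plus the 50-pound load) split across two paddles, one can back out the minimum adhesive stress each paddle must sustain. The paddle area below is a hypothetical figure for illustration; the weights are from the demonstration:

```python
# Back-of-the-envelope check on the Z-Man demonstration: total supported
# weight divided by total contact area gives the minimum average adhesive
# stress. The paddle area is a hypothetical assumption.

LB_TO_N = 4.44822  # newtons per pound-force

def min_adhesive_stress_kpa(total_lb, paddle_area_cm2, n_paddles=2):
    """Minimum average adhesive stress (kPa) the paddles must sustain."""
    force_n = total_lb * LB_TO_N
    area_m2 = n_paddles * paddle_area_cm2 / 10_000  # cm^2 -> m^2
    return force_n / area_m2 / 1000                 # Pa -> kPa

# 218 lb climber + 50 lb load, assuming two 400 cm^2 paddles:
print(round(min_adhesive_stress_kpa(268, 400), 1))  # -> 14.9
```

Under these assumed dimensions the required stress is on the order of 15 kPa, well below the adhesive stresses reported for natural gecko setae, which is why the area-scaling approach is plausible at human weights.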

Warfare constantly advances in technology and strategies, but ropes and ladders – still needed to scale walls – have not significantly changed in thousands of years.

“‘Geckskin’ is one output of the Z-Man program. It is a synthetically-fabricated reversible adhesive inspired by the gecko’s ability to climb surfaces of various materials and roughness, including smooth surfaces like glass,” DARPA officials wrote on the Z-Man website.

Advances in this bio-inspired technology could have benefits beyond the battlefield. Materials similar to the structure in the pad could be used as temporary adhesives in bandages and in industrial and commercial products.

http://www.techtimes.com/articles/8287/20140610/gecko-inspired-darpa-paddles-become-spider-man.htm

Japanese artist Isao Hashimoto has created a beautiful, undeniably scary time-lapse map of the 2,053 nuclear explosions that took place between 1945 and 1998, beginning with the Manhattan Project’s “Trinity” test near Los Alamos and concluding with Pakistan’s nuclear tests in May 1998. The map leaves out North Korea’s two alleged nuclear tests from the past decade (the legitimacy of both of which is not entirely clear).

Each nation gets a blip and a flashing dot on the map whenever it detonates a nuclear weapon, with a running tally kept on the top and bottom bars of the screen. Hashimoto, who began the project in 2003, says he created it with the goal of showing “the fear and folly of nuclear weapons.” It starts slowly — if you want to see real action, skip ahead to 1962 or so — but the buildup becomes overwhelming.
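Under the hood, a visualization like this is just a cumulative counter keyed by country, advanced one detonation event at a time. A minimal sketch of that tally logic, using a tiny hypothetical event list rather than Hashimoto’s full 2,053-entry dataset:

```python
from collections import Counter

# Minimal sketch of the map's running tally: each detonation event bumps
# that nation's count and produces a new snapshot. The event list below
# is a tiny illustrative sample, not the real dataset.

def running_tally(events):
    """Yield (year, per-country totals) after each detonation event."""
    tally = Counter()
    for year, country in events:
        tally[country] += 1
        yield year, dict(tally)

sample = [(1945, "USA"), (1949, "USSR"), (1952, "USA")]
for year, counts in running_tally(sample):
    print(year, counts)
# Final snapshot: 1952 {'USA': 2, 'USSR': 1}
```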

http://memolition.com/2013/10/16/time-lapse-map-of-every-nuclear-explosion-ever-on-earth/

Thanks to Jody Troupe for bringing this to the attention of the It’s Interesting community.


Researchers at the Massachusetts Institute of Technology, the U.S. Army Research, Development and Engineering Command (RDECOM) and other groups from business and academia are joining forces to create a Tactical Assault Light Operator Suit, or TALOS, that “promises to provide superhuman strength with greater ballistic protection,” according to a statement released by the U.S. Army.

The most amazing features of the suit include integrated 360-degree cameras not unlike Google Glass (but with night vision capabilities), sensors that can detect injuries and apply a wound-sealing foam, and — get ready for this — a bulletproof exoskeleton made of magnetorheological fluids that can change from liquid to solid in milliseconds when a magnetic field or electrical current is applied.
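Magnetorheological fluids behave roughly like Bingham plastics: with no field applied they flow freely, and an applied magnetic field raises their yield stress within milliseconds, so they resist shear like a solid until that stress is exceeded. A crude illustrative model of that behavior (the coefficient and saturation values are invented, not figures from the TALOS program):

```python
# Crude Bingham-plastic picture of a magnetorheological fluid: an applied
# magnetic field raises the yield stress, so the fluid acts solid until
# the applied stress exceeds it. All constants here are invented.

def yield_stress_kpa(field_tesla, coeff=80.0, saturation_kpa=60.0):
    """Toy model: yield stress grows with field, capped at saturation."""
    return min(coeff * field_tesla, saturation_kpa)

def is_solid_like(applied_stress_kpa, field_tesla):
    """The fluid behaves as a solid while stress stays below yield."""
    return applied_stress_kpa < yield_stress_kpa(field_tesla)

print(is_solid_like(20.0, field_tesla=0.0))  # -> False (no field: flows)
print(is_solid_like(20.0, field_tesla=0.5))  # -> True  (field on: rigid)
```

The millisecond switching time quoted in the article is the practical selling point: armor that is flexible to wear but stiffens faster than a bullet or blast wave arrives.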

If it all reminds you of the liquid-metal shapeshifter T-1000 from “Terminator” or some other sci-fi character, you’re not alone. “It sounds exactly like ‘Iron Man,'” Gareth McKinley, a professor at MIT, told NPR. “The other kind of things that you see in the movies I think that would be more realistic at the moment would be the kind of external suit that Sigourney Weaver wears in ‘Aliens,’ where it’s a large robot that amplifies the motions and lifting capability of a human.”

The developers from RDECOM, MIT and elsewhere are researching “every aspect making up this combat armor suit,” Lt. Col. Karl Borjes, an RDECOM science adviser, said in the U.S. Army statement. “It’s advanced armor. It’s communications, antennas. It’s cognitive performance. It’s sensors, miniature-type circuits. That’s all going to fit in here, too.”

Not everyone, however, is enamored with the super-advanced gizmos being proposed for the soldiers of tomorrow. “My sense is it is an up-armored Pinocchio,” Scott Neil, a retired special forces master sergeant and Silver Star recipient, told the Tampa Tribune. “Now the commander can shove a monkey in a suit and ask us to survive a machine gun, IED [improvised explosive device] and poor intelligence all on the same objective. And when you die in it, as it melds to your body, you can bury them in it.”

Even believers in the TALOS suit acknowledge its limitations. “The acronym TALOS was chosen deliberately,” McKinley said. “It’s the name of the bronze armored giant from ‘Jason and the Argonauts.’ Like all good superheroes, Talos has one weakness. For the Army’s TALOS, the weak spot is either the need to carry around a heavy pump for a hydraulic system, or lots of heavy batteries. We don’t have Iron Man’s power source yet.”

For would-be sci-fi superheroes who are ready for their very own TALOS, the wait may prove excruciating: Though various components of the suit are currently in development, the Army hopes to have a prototype ready next year, and an advanced model won’t be developed until at least two years after that.

http://www.livescience.com/40325-army-iron-man-suit-talos.html


Boeing has said that one of its retrofitted Lockheed Martin F-16 jets made its first flight with an empty cockpit last week.

Two US Air Force pilots controlled the plane from the ground as it flew from a Florida base to the Gulf of Mexico.

Boeing suggested that the innovation could ultimately be used to help train pilots, providing an adversary they could practise firing on.

The jet – which had previously sat mothballed at an Arizona site for 15 years – flew at an altitude of 40,000ft (12.2km) and a speed of Mach 1.47 (1,119mph/1,800km/h).

It carried out a series of manoeuvres including a barrel roll and a “split S” — a move in which the aircraft turns upside down before making a half loop so that it ends up flying the right way up in the opposite direction. This can be used in combat to evade attack.

Boeing said the unmanned F-16 was followed by two chase planes to ensure it stayed in sight, and also contained equipment that would have allowed it to self-destruct if necessary.

The firm added that the flight attained 7Gs of acceleration but was capable of carrying out manoeuvres at 9Gs – something that might cause physical problems for a pilot.

“It flew great, everything worked great, [it] made a beautiful landing – probably one of the best landings I’ve ever seen,” said Paul Cejas, the project’s chief engineer.

Lt Col Ryan Inman, Commander of the US Air Force’s 82nd Aerial Targets Squadron, also had praise for how the test had gone.

“It was a little different to see it without anyone in it, but it was a great flight all the way around,” he said.

Boeing said that it had a total of six modified F-16s, which have been renamed QF-16s, and that the US military now planned to use some of them in live fire tests.

However, a spokesman for the Campaign to Stop Killer Robots warned of the temptation to use them in warfare.

“I’m very concerned these could be used to target people on the ground,” said Prof Noel Sharkey.

“I’m particularly worried about the high speed at which they can travel because they might not be able to distinguish their targets very clearly.

“There is every reason to believe that these so-called ‘targets’ could become a test bed for drone warfare, moving us closer and closer to automated killing.”

This is not the first time a jet has been retrofitted to fly without a pilot inside. The US Air Force has previously used adapted F-4 Phantoms for target practice.

http://www.bbc.co.uk/news/technology-24231077

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.