Friday, November 3, 2017

Fwd: Abundance Insider: November 3 Edition





In this week's Abundance Insider: Robot citizens, teleoperating robots with VR, and the powerful next iteration of DeepMind's AlphaGo.

Cheers,
Peter, Marissa, Kelley, Greg, Sydney, AJ, Bri and Jason

P.S. Send any tips to our team by clicking here, and send your friends and family to this link to subscribe to Abundance Insider.

Robot Bees Dive In and Out of Water Using Tiny Combustible Rockets

What it is: Flying bots (think tiny drones) have been around for several years now, and advances like swarm-based intelligence have augmented their capabilities significantly. Now, Harvard researchers have given their robotic bees a new mechanical ability. The bees could already fly, swim and dive into water, but until now surface tension made getting back out a massive undertaking (the surface tension force is roughly 10 times the bot's weight). By generating oxyhydrogen from water with two electrolytic plates, Robert Wood and his team have effectively strapped a tiny rocket onto the bee which, when ignited, propels it out of the water.

Why it's important: Compared with AI-based systems, this advance might seem like a small deal, particularly since the bot can't be remotely controlled. But the ability to move between different media or conditions, and in particular to use that medium as fuel, is a big step forward in versatility and adaptability. Consider use cases beyond water, like oil spills, and this should spark ideas and lead to new breakthroughs.  Share on Facebook

Spotted by Sydney Fulkerson / Written by Jason Goodwin 

Sophia Becomes World's First Robot Citizen

What it is: Saudi Arabia has just granted citizenship to Sophia, a humanoid robot developed by Hanson Robotics — the first time a country has granted a robot the same basic status and privileges as humans. The announcement took place onstage at the Future Investment Initiative in Riyadh, reports Fortune. "I am very honored and proud for this unique distinction," Sophia said to the audience during her presentation.

Why it's important: As robotics, artificial intelligence, sensors and other enabling exponential technologies develop, we'll start to see more humanoid robots that look, sound and even feel truly lifelike. How will humans respond as these next-gen robots become integrated throughout society — namely, in our communities, offices and homes?  Share on Facebook

Spotted by Sydney Fulkerson / Written by Marissa Brassfield 

Generating Real-Time Video From Live Brain Scans

What it is: Using a form of deep learning called convolutional neural networks (CNNs), Purdue engineering researchers have developed a system capable of decoding fMRI data streams to recognize images in near real time. After acquiring "11.5 hours of fMRI data from each of three women subjects watching 972 video clips," as KurzweilAI reports, the system was then able to accurately recognize faces, ships, birds and other scene examples, and understand which areas of the visual cortex were associated with which types of images and information. Importantly, the researchers were also able to do "cross-subject encoding and decoding," which entails taking data from one person and successfully predicting images seen by another person.

Why it's important: While the images are not perfect, this is a massive step forward in understanding how the brain functions and in developing not only therapeutic systems but also brain-computer interfaces (BCIs). Importantly, although we've seen similar examples recently that rely on electrode stimulation and feedback, fMRI is relatively non-invasive, and progress in this space should proceed at a much more rapid rate.  Share on Facebook

Spotted by Marissa Brassfield  / Written by Jason Goodwin 

Teleoperating Robots with Virtual Reality

What it is: MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a VR system that allows users to teleoperate robots via an Oculus Rift or HTC Vive. By mapping the human's space to virtual space, and the virtual space to the robot's space, the team enabled users to manipulate objects like stacking boxes at a 100 percent success rate, versus 66 percent when operating robots via remote control. Additionally, by introducing stereo HD cameras and relying on the human's visual cortex to map the 3D space instead of a computationally heavy GPU, the team significantly reduced latency and motion sickness. They also successfully operated a robot from over 300 miles away (Boston to Washington, DC).
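The space-to-space mapping described above can be pictured as composing two rigid transforms: one from the operator's tracked frame into the shared virtual frame, and one from the virtual frame into the robot's frame. Here's a minimal sketch of that idea using homogeneous transformation matrices; the rotations and translations below are illustrative placeholders, not CSAIL's actual calibration:

```python
import numpy as np

def transform(yaw_deg, translation):
    """Build a 4x4 homogeneous transform from a yaw angle and a translation."""
    t = np.radians(yaw_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    T[:3, 3] = translation
    return T

# Illustrative calibrations (hypothetical values): operator frame -> virtual
# frame, then virtual frame -> robot frame.
human_to_virtual = transform(90, [0.5, 0.0, 0.0])
virtual_to_robot = transform(-90, [0.0, 1.0, 0.2])

# Composing the two maps sends a tracked hand position directly into robot
# coordinates, which is what lets the robot mirror the operator's motions.
human_to_robot = virtual_to_robot @ human_to_virtual

hand = np.array([0.2, 0.1, 1.4, 1.0])  # homogeneous point in the operator's space
print(human_to_robot @ hand)
```

Because the transforms compose into a single matrix, every tracked pose costs one matrix multiply per frame, which is part of why this style of mapping stays low-latency.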

Why it's important: While the researchers tout the benefits of this system for enabling manufacturing workers to participate in the mobile work revolution, the use cases will expand to a much broader universe. What happens when you can place VR-enabled robots in hazardous areas where humans can't go, or when motor control becomes fine-tuned enough to handle ever more granular tasks? Share on Facebook

Spotted by Marissa Brassfield / Written by Jason Goodwin

Memphis Meats Expects Meat From Cells to Be Available in 2021

What it is: Memphis Meats, a company that uses self-reproducing animal cells to produce beef, chicken and duck without the animals, has announced the backers of its recent $17 million Series A round. Venture capital firm DFJ led the round, which also featured investments from Bill Gates, Richard Branson and Cargill. "They're the only one that convinced me they can get to a price point and a scale that would make a difference in the industry," said Steve Jurvetson of DFJ, who reportedly spent five years researching "clean meat" technologies and companies.

Why it's important: "Clean meat" technologies could dramatically reduce global greenhouse gas production — 14.5 percent of which comes from livestock. As Peter has written in previous blogs, bioprinting meat would enable us to "...feed the world with 99% less land, 96% less water, 96% fewer greenhouse gases and 45% less energy." Considering converging advances in gene editing, vertical farming, sensors and drones, we may soon be able to eradicate hunger and famine for good. Share on Facebook

Spotted by Marissa Brassfield / Written by Marissa Brassfield 

AlphaGo Zero Trains Itself to Be Most Powerful Go Player in the World

What it is: Google's DeepMind has announced the next iteration of AlphaGo — AlphaGo Zero — which is now arguably the best Go player in history. Unlike AlphaGo's algorithm, which was trained on human amateur and professional games, AlphaGo Zero taught itself to play by playing against itself. In just three days, AlphaGo Zero surpassed the abilities of AlphaGo Lee, the version that beat world champion Lee Sedol in 2016. At 40 days, AlphaGo Zero defeated AlphaGo 100 games to 0.
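The self-play loop at the heart of this approach can be caricatured in a few lines: the agent plays games against its current self, starting from random moves, and each finished game becomes training signal that nudges its policy toward the winner's choices. The toy sketch below, a tabular learner for a trivial take-1-or-2-stones game, is nothing like DeepMind's actual search-plus-neural-network system, but it illustrates the pattern of learning purely from self-play with no human game data:

```python
import random

random.seed(0)

# Toy game: 5 stones, players alternate taking 1 or 2; whoever takes the
# last stone wins. Q[(stones, action)] estimates how good taking `action`
# stones is for the player to move.
Q = {}

def legal(stones):
    return [a for a in (1, 2) if a <= stones]

def best(stones):
    return max(legal(stones), key=lambda a: Q.get((stones, a), 0.0))

for episode in range(5000):
    stones, history = 5, []
    while stones > 0:
        # Mostly play the current best move, sometimes explore at random.
        a = random.choice(legal(stones)) if random.random() < 0.2 else best(stones)
        history.append((stones, a))
        stones -= a
    # The player who made the last move won. Walk the game backwards,
    # rewarding the winner's moves and penalizing the loser's.
    reward = 1.0
    for state, action in reversed(history):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + 0.1 * (reward - old)
        reward = -reward  # flip perspective each ply

print(best(5))  # the learned opening move
```

With enough episodes the table converges to optimal play for this tiny game, with no external training data, which is the same qualitative property that lets AlphaGo Zero start from random moves and exceed its human-trained predecessor.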

Why it's important: Faster computers are being used to build faster computers — and AlphaGo Zero proves how fast artificial intelligence is developing. This algorithm is unique in that it independently taught itself to play, first by testing random moves. What becomes possible when algorithms can train themselves, without the aid of data scientists and extensive training data?  Share on Facebook

Spotted by Marissa Brassfield  / Written by Marissa Brassfield

What is Abundance Insider?

This email is a briefing of the week's most compelling, abundance-enabling tech developments, curated by Marissa Brassfield in preparation for Abundance 360. Read more about A360 below.

Want more conversations like this?

At Abundance 360, Peter's 250-person executive mastermind, we teach the metatrends, implications and unfair advantages for entrepreneurs enabled by breakthroughs like those featured above. We're looking for CEOs and entrepreneurs who want to change the world. The program is highly selective. If you'd like to be considered, apply here.

Know someone who would benefit from getting Abundance Insider? Send them to this link to sign up.
