Today concludes F8, our annual two-day event where developers come together to explore the future of technology.
In the opening keynote, Facebook Chief Technology Officer Mike Schroepfer talked about our goal to develop technology that will help everyone build global community. To do that, we’re investing in a number of foundational technologies over the next 10 years, including connectivity, artificial intelligence, and virtual and augmented reality.
Schroepfer and other keynote speakers — Director of Connectivity Programs, Yael Maguire; Director of Applied Machine Learning, Joaquin Quiñonero Candela; Chief Scientist of Oculus Research, Michael Abrash; and Vice President of Engineering and Building 8, Regina Dugan — shared updates and visions for some of our long-term focus areas.
Rather than look for a one-size-fits-all connectivity solution, Facebook is investing in a building-block strategy — designing different technologies for different use cases, which are then used together to create flexible and extensible networks.
Today we highlighted new milestones in our efforts to reach people who are unconnected and to increase capacity and performance for everyone else. Our team set three new records in wireless data transfer: 36 gigabits per second over a 13-kilometer point-to-point link using millimeter-wave (MMW) technology; 80 gigabits per second between those same points using our optical cross-link technology; and 16 gigabits per second from a ground station to a circling Cessna aircraft more than 7 kilometers away using MMW. Additionally, our Terragraph system, being tested with San Jose in the city’s downtown corridor, has become the first city-scale millimeter-wave mesh system capable of delivering fiber-like, multi-gigabit-per-second performance and reliability.
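To put the record link speeds above in perspective, here is a rough back-of-envelope calculation of how long each link would take to move a large file. The 100 GB payload is an assumed example for illustration, not a figure from the announcement.

```python
# Back-of-envelope transfer times at the record link rates reported above.
# The 100 GB payload size is an assumed example, not from the announcement.
payload_gigabits = 100 * 8  # 100 gigabytes expressed in gigabits

link_rates_gbps = {
    "MMW point-to-point (13 km)": 36,
    "Optical cross-link (13 km)": 80,
    "Ground-to-Cessna MMW (7 km)": 16,
}

for link, rate in link_rates_gbps.items():
    seconds = payload_gigabits / rate
    print(f"{link}: {seconds:.1f} s to move 100 GB")
```

At these rates, the same 100 GB payload moves in roughly 22 seconds over the MMW point-to-point link, 10 seconds over the optical cross-link, and 50 seconds over the air-to-ground link.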
We also announced Tether-tenna, a new kind of “insta-infrastructure”: a small helicopter tethered to a wire containing fiber and power that can be deployed immediately to restore connectivity in an emergency.
AI is a powerful tool, and Facebook is leveraging it to build amazing visual experiences for people — including an AI-infused camera across Facebook, Instagram and Messenger. With the ability to run cutting-edge AI and computer vision algorithms on the device, this camera can understand your surroundings and recognize people, places and things. It can annotate and enhance images and video. The new Camera Effects Platform gives developers a way to build new tools for creative expression, and we shared a few demos of ideas that have come out of our research.
In a keynote presentation today, Applied Machine Learning Director Joaquin Quiñonero Candela talked about how AI has revolutionized the ability of computers to process and understand images and videos. It’s easy to forget that only five years ago, computers saw images as just a collection of numbers with no particular meaning. Now computers can understand every individual pixel of an image. These advancements enable new experiences, like adding digital objects and effects to a real-world scene.
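The shift described above, from images as raw numbers to per-pixel meaning, can be sketched with a toy example. This is purely illustrative (the threshold rule is invented and is not Facebook's model): a tiny grayscale "image" is just an integer array, and a per-pixel classifier assigns each pixel a label.

```python
import numpy as np

# A tiny 4x4 grayscale "image": to the computer it is only numbers.
image = np.array([
    [ 10,  12, 200, 210],
    [ 11,  13, 205, 215],
    [  9, 190, 195, 220],
    [  8, 185, 198, 225],
])

# A toy per-pixel classifier (an invented threshold rule, for illustration):
# bright pixels are labeled "object" (1), dark pixels "background" (0).
labels = (image > 128).astype(int)

print(labels)
```

Real systems learn these per-pixel labels from data with convolutional networks rather than a fixed threshold, but the output has the same shape: one meaningful label per pixel instead of one bare number.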
We believe AI belongs to everyone. That’s why in addition to opening the Camera Effects Platform, we announced that we are open sourcing Caffe2 — a framework to build and run AI algorithms on a phone — and building partnerships with Amazon, Intel, Microsoft, NVIDIA, Qualcomm, and others.
Facebook is investing in VR across mobile and PC hardware, software and content — from Oculus Rift and Gear VR to Facebook Spaces.
Today we introduced the newest designs for the Surround 360 technology that allows people to produce amazing, high-quality videos for VR. The new x24 and its smaller counterpart, the x6, create some of the most immersive and engaging content ever shot for VR. The new camera technology lets you move your head around within the video scene and see it from different viewing angles — what’s known as six degrees of freedom, or 6DoF — bringing the feeling of immersion and depth to a whole new level.
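The six degrees of freedom mentioned above, three for translation and three for rotation, can be made concrete with a minimal pose computation. The specific numbers below are arbitrary examples, not values from the Surround 360 system.

```python
import numpy as np

# A 6DoF head pose: 3 translation components plus rotation. Only yaw is
# shown here; pitch and roll work the same way. Values are arbitrary examples.
translation = np.array([0.1, 0.0, -0.2])   # metres: x, y, z
yaw = np.pi / 2                            # 90 degrees about the vertical axis

# Rotation matrix for yaw about the y (vertical) axis.
rotation = np.array([
    [np.cos(yaw),  0.0, np.sin(yaw)],
    [0.0,          1.0, 0.0        ],
    [-np.sin(yaw), 0.0, np.cos(yaw)],
])

# A point one metre in front of the viewer, in head coordinates.
point = np.array([0.0, 0.0, -1.0])

# Where that point lands in world coordinates after applying the pose.
world = rotation @ point + translation
print(world)
```

Tracking all six values per frame is what lets a 6DoF video respond correctly when you lean, stand or turn, rather than only when you rotate your head in place.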
On day one of F8, Mark Zuckerberg talked about how the camera is the first augmented reality platform. Today Chief Scientist of Oculus Research Michael Abrash shared a vision for the path to full AR — where augmentation enhances your vision and hearing seamlessly while being light, comfortable, power-efficient and socially acceptable enough to accompany you everywhere.
He talked about the rise of virtual computing — which encompasses both virtual and augmented reality — as the next great wave after personal computing. Virtual computing is just starting to form, but it will give us the ability to transcend time and space to connect with one another in new ways.
In order to make virtual computing as much a part of everyday life as the smartphone is today, we’re going to need see-through augmented reality, which will likely be transparent glasses that can show virtual images overlaid on the real world.
The set of technologies needed to reach full AR doesn’t exist yet. This is a decade-long investment, and it will require major advances in materials science, perception, graphics and many other areas. But once that’s achieved, AR has the potential to enhance almost every aspect of our lives, revolutionizing how we work, play and interact.
Building 8 is the product development and research team at Facebook focused on creating and shipping new, category-defining consumer products that are social-first and that advance Facebook’s mission. Products from Building 8 will be powered by a breakthrough innovation engine modeled after DARPA and shipped at scale.
At F8 we announced two projects focused on silent speech communications.
We are working on a system that will let people type with their brains. Specifically, we have a goal of creating a silent speech system capable of typing 100 words per minute straight from your brain – that’s five times faster than you can type on a smartphone today. This isn’t about decoding your random thoughts. Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and choose to share only some of them. This is about decoding those words you’ve already decided to share by sending them to the speech center of your brain. It’s a way to communicate with the speed and flexibility of your voice and the privacy of text. We want to do this with non-invasive, wearable sensors that can be manufactured at scale.
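The speed claim above works out as simple arithmetic: if 100 words per minute is five times today's smartphone typing speed, the implied smartphone baseline is about 20 words per minute. The 5-characters-per-word convention used below is our assumption, not a figure from the announcement.

```python
# Implied smartphone typing baseline from the figures quoted above.
target_wpm = 100      # goal: 100 words per minute, straight from the brain
speedup = 5           # "five times faster than you can type on a smartphone"
smartphone_wpm = target_wpm / speedup
print(smartphone_wpm)  # 20.0 words per minute

# Using the conventional 5-characters-per-word rule (our assumption),
# that baseline is about 100 characters per minute.
chars_per_word = 5
smartphone_cpm = smartphone_wpm * chars_per_word
print(smartphone_cpm)  # 100.0 characters per minute
```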
We also have a project aimed at allowing people to hear with their skin. We are building the hardware and software necessary to deliver language through the skin.