Wallstreetcn
2023.06.06 06:50

Did Apple's "tech gala" pack in too much? Will the market be satisfied with buzzwords like "AI, MR, and brain-computer interfaces" served up all at once?

The consumer electronics market is about to change, yet Apple's stock price still fell.

At its most highly anticipated WWDC in over a decade, Apple surprised the market with "one more thing": a new computing platform, the mixed reality headset Vision Pro, following the Mac, iPhone, iPad, and Apple Watch.

Judging from the live demonstration at the keynote and subsequent media reviews, Vision Pro, the first hardware platform developed entirely under Cook's leadership, lives up to expectations. It distills the industrial design and engineering capability Apple has accumulated over more than a decade, making it arguably the "most advanced consumer electronics device in the world" and showing the potential to become, as Cook has hoped, the "next Apple."

However, despite the stunning debut of Vision Pro at the conference, Apple's stock price fell slightly overnight.

Renowned Apple analyst Ming-Chi Kuo believes this is mainly because Vision Pro's high price (up to $3,499, out of reach for the mass market) and distant release date (launching in the US early next year) weigh on short-term investor sentiment. The keynote also failed to clearly demonstrate why the device would be necessary for daily use.

In short, although the headset is cool and futuristic, it lacks the everyday necessity of phones and computers. Wallstreetcn previously noted that the market broadly questions this: why would consumers move tasks that are easy on traditional devices to a headset that costs upwards of $3,000 and demands more complicated interactions?

Nevertheless, Apple and Cook are undoubtedly winners. Through this two-hour-plus, content-packed "tech gala," Apple not only demonstrated its formidable strength in software and hardware development, but also quietly and pragmatically flexed its muscles in emerging fields such as AI and brain-computer interfaces.

Vision Pro: Metaverse? Don't Even Mention It!

Judging from the keynote demonstration, Vision Pro does look like something out of science fiction.

In terms of interaction, this ski-goggle-like headset can be operated entirely by eyes, voice, and hand gestures, with no bulky external controllers or the smart bracelet that had previously been rumored.

After putting on the headset and unlocking it with Optic ID (iris recognition), you can experience scenes straight out of the Iron Man movies:

What you see is still the world around you, but the visionOS interface appears in front of you, including hundreds of thousands of apps from the Apple ecosystem, as well as Microsoft Office, Apple's built-in Photos, Notes, and more. You simply gaze at the app you need and pinch two fingers together to open it.

In terms of entertainment, it crushes its competitors. At the keynote, Apple demonstrated Vision Pro's "spatial computing" capability: turn the Digital Crown on the headset and you can enlarge the movie screen in front of you without limit, blending the surrounding environment with the content you are watching. Imagine having a private cinema on a high-speed train, an airplane, or in any room, simply by putting on Vision Pro. Apple even brought Disney CEO Bob Iger on stage to announce that Disney content will come to Vision Pro.

In addition, Apple emphasized that Vision Pro is not a device that isolates users from their surroundings. When someone approaches, the front glass of the headset becomes transparent so others can see your eyes and talk to you; when you are in a fully immersive experience, the front appears opaque, signaling to others that you cannot see them. Compared with pure VR devices like Meta's Oculus line, Vision Pro seamlessly connects the real and virtual worlds. It is undoubtedly the best, and most expensive, headset available today, far surpassing in polish all the VR devices anchored to the "metaverse."

Wallstreetcn previously noted that Cook has made no secret of his dislike for the metaverse concept. Throughout the entire keynote, Apple did not say the word "metaverse" once.

After the WWDC keynote, helpful netizens circulated comparison charts contrasting Vision Pro with competing headsets.

Low-key and Pragmatic AI Strategy

Although the newly launched mixed reality headset Vision Pro drew almost all the attention, Apple said little about its AI work directly. Instead, it let the facts speak for themselves, showing what AI can accomplish in practice to genuinely improve the user experience.

In fact, as a product company, Apple rarely talks about "artificial intelligence" as such, preferring the more academic term "machine learning" or simply describing the changes the technology brings when it ships. This lets Apple focus on the products themselves and on refining the user experience.

Specifically, at the keynote Apple announced an improved autocorrect feature built on machine learning, using a transformer language model, the same architecture underlying ChatGPT. Apple said it will even learn from users' writing and typing habits to personalize their experience.
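Apple has not published how its new autocorrect works, but the transformer-language-model framing suggests the classic noisy-channel pattern: rank candidate words by both their closeness to the typed string and their probability in context. The sketch below is purely illustrative, with a toy bigram model standing in for the transformer and an invented weighting between the two signals.

```python
# Illustrative noisy-channel autocorrect: pick the vocabulary word that best
# balances edit similarity to the typo against language-model probability.
# The bigram model and 0.3 weighting are toy stand-ins, not Apple's design.
from collections import defaultdict

CORPUS = "the ducking cat sat on the mat the cat ran to the mat".split()

# Build bigram counts to estimate P(word | previous word) from the toy corpus.
bigrams = defaultdict(lambda: defaultdict(int))
for prev, word in zip(CORPUS, CORPUS[1:]):
    bigrams[prev][word] += 1

def lm_prob(prev: str, word: str) -> float:
    """Probability of `word` following `prev` under the toy bigram model."""
    total = sum(bigrams[prev].values())
    return bigrams[prev][word] / total if total else 0.0

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard dynamic-programming recurrence."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev_row, dp = dp, [i] + [0] * len(b)
        for j, cb in enumerate(b, 1):
            dp[j] = min(prev_row[j] + 1, dp[j - 1] + 1,
                        prev_row[j - 1] + (ca != cb))
    return dp[-1]

def autocorrect(prev: str, typed: str, vocab: set[str]) -> str:
    """Return the vocab word with the best mix of closeness and context fit."""
    def score(cand: str) -> float:
        # Higher LM probability is better; each edit costs a fixed penalty.
        return lm_prob(prev, cand) - 0.3 * edit_distance(typed, cand)
    return max(vocab, key=score)

vocab = set(CORPUS)
print(autocorrect("the", "cst", vocab))  # prints "cat"
```

Context is what makes the correction feel "smart": "mat" is an equally close edit, but after "the" the model's combined score still favors "cat" here because a one-edit candidate is penalized less.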

In addition, Apple improved the AirPods Pro experience: noise cancellation can now switch off automatically when the user starts a conversation. Although Apple did not label this a "machine learning" feature, it is a hard problem to solve, and the solution relies on AI models.

Another practical application is the new digital Persona feature, which scans the user's face in 3D so that, when wearing Vision Pro on a video call, their appearance can be virtually reproduced for others. Apple also mentioned several other features that draw on its neural-network work, such as identifying form fields to fill in PDFs.

Does the Headset Hide Brain-Computer Interface Technology?

On Tuesday, Sterling Crispin, who has worked in the AR/VR field for ten years and previously served as a neurotechnology researcher at Apple, tweeted about the development of Apple's AR neural technology.

According to Crispin, his work at Apple included supporting the foundational development of Vision Pro, its mindfulness experiences, and more ambitious neurotechnology research. Much of it involved detecting users' body and brain data during immersive experiences to infer their mental state. In mixed or virtual reality, AI models attempt to predict whether users feel curious, are mind-wandering, scared, paying attention, recalling a past experience, or in some other cognitive state.

These states can be inferred from measurements such as eye tracking, EEG brain activity, heart rate and rhythm, muscle activity, blood density, blood pressure, and skin conductance, making behavioral prediction possible.

According to Apple's patent description and Crispin's introduction, Apple's neural technology can predict user behavior and adjust virtual environments based on user states.

One of the coolest results is predicting what a user will click before they actually click it. Pupils often react before a click because the user anticipates what will happen afterward.

By monitoring eye behavior and redesigning the user interface in real time to elicit more of these anticipatory pupil responses, a biofeedback loop can be created. This amounts to a crude "brain-computer interface" achieved through the eyes, with no invasive brain surgery required.

Other techniques for inferring cognitive states include quickly flashing visual or auditory stimuli to users in ways they may not consciously perceive, and then measuring their reactions.

Another patent details using machine learning and signals from the body and brain to predict how focused or relaxed a user is, or how effectively they are learning, and then updating the virtual environment to reinforce those states.

Imagine an adaptive immersive environment that helps you learn, work, or relax by changing what you see and hear in the background.
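One way to picture such an adaptive environment (purely hypothetical; the patent describes machine-learning models, not this simple controller, and both the focus heuristic and gains below are invented) is a feedback loop that nudges an environment parameter toward a target focus level:

```python
# Invented sketch of an adaptive immersive environment: estimate focus from
# body signals, then steer an environment knob (here, ambient dimming) to
# keep focus near a target. A proportional controller stands in for the
# learned models the patent describes.

def estimate_focus(heart_rate: float, gaze_stability: float) -> float:
    """Toy focus score in [0, 1]: steadier gaze and calmer heart rate score higher."""
    calm = max(0.0, min(1.0, (100.0 - heart_rate) / 40.0))  # 60 bpm maps to 1.0
    return 0.5 * calm + 0.5 * gaze_stability

def adapt_environment(dimming: float, focus: float,
                      target: float = 0.8, gain: float = 0.5) -> float:
    """Raise dimming when focus is below target, relax it when above; clamp to [0, 1]."""
    dimming += gain * (target - focus)
    return max(0.0, min(1.0, dimming))

# Simulate a distracted user (elevated heart rate, wandering gaze): the loop
# keeps darkening the surroundings to pull attention back to the content.
dimming = 0.2
for _ in range(5):
    focus = estimate_focus(heart_rate=90.0, gaze_stability=0.4)
    dimming = adapt_environment(dimming, focus)
print(round(dimming, 2))  # prints 1.0: dimming saturates at the clamp
```

The same loop structure would apply to any environmental variable, such as background audio, lighting, or the pacing of content, with the sensing and control models learned rather than hand-tuned.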