Ray-Ban’s Meta smart glasses are getting smarter.
The camera-equipped spectacles, launched in December, can now use their multimodal AI smarts to retrieve information about popular landmarks.
Meta CTO Andrew Bosworth, who announced the news on Threads, shared some examples of how this works in practice. For instance, asking the glasses (via the built-in mics) for “a cool fact” about the Golden Gate Bridge, while looking at said bridge, nets a result in which the glasses tell you (via the built-in speakers) about the bridge’s famous International Orange color.
In another example, the Meta glasses share some facts about San Francisco’s Coit Tower.
All of this is currently available to beta testers only; those without access can join the waitlist on Meta’s website.
Bosworth shared a few other tidbits about the Meta glasses’ smart features. The hands-free experience has gotten an update that lets users share their latest Meta AI interaction on WhatsApp or Messenger, or send it as a text message. You can also share the last photo you took with one of your contacts. Finally, podcast listeners will “soon” be able to configure Meta AI readouts to be slower or faster; the option will be available under voice settings.
The Ray-Ban Meta smart glasses (read our review here) are getting a bit brainier this December thanks to Meta’s AI wizardry, elevating them from a gadget that mostly serves for taking photos and videos to something you might actually want to use when you need an AI voice assistant’s help. The landmark-describing feature, while fairly narrow in scope, is a great fit for the glasses, and we hope to see more such features added to the Meta glasses in the future.
Topics
Augmented Reality
Meta