Google DeepMind releases Gemini AI pointer demos
Google DeepMind released experimental demos of an AI-enabled pointer that integrates with its Gemini model. The system supports on-screen actions through motion, speech, and shorthand gestures. Examples shown include updating a shopping list with courgette, aubergine, and red onion; editing text in a code editor; and rescheduling a calendar event titled The Thinking Game at Asimov Cinema. The previews also featured a Windows 95-style browser window and overlaid cursor controls.
Really cool work from the team reimagining the mouse pointer to be intelligent! Try the prototype in @GoogleAIStudio; it's pretty magical.
We’re reimagining a 50-year-old interface - the mouse pointer - with AI. 🖱️ These experimental demos show how people can intuitively direct Gemini on their screens using motion, speech, and natural shorthand to get things done 🧵
Pointing is a natural and intuitive interface for Gemini! Check these demos out.
cool!