Google tool teaches machine learning to six-year-olds
Teachable Machine 2.0 is an unintimidating no-code platform that eases kids into understanding machine learning
Google has launched the second iteration of its no-code Teachable Machine, letting inexperienced users build bespoke machine learning (ML) models and apply them to projects such as classroom activities.
Teachable Machine 2.0 carries over the features of the original, allowing users to record images and video from a webcam and use them to train ML models for tasks like pattern recognition. These models can now also be exported to websites, apps and physical machines.
Open-source curricula are making use of the tool to give children their first taste of ML without the intimidating prospect of learning to code.
One such example is a programme for six- to 10-year-olds run out of MIT's Media Lab by education researcher Blakeley H. Payne. The children are invited to the lab, where they use Teachable Machine 2.0, among other tools, to build a broader understanding of technology and what it can do.
"Parents - especially of girls - often tell me their child is nervous to learn about AI because they have never coded before," said Payne. "I love using Teachable Machine in the classroom because it empowers these students to be designers of technology without the fear of 'I've never done this before.'"
The tool is entirely browser-based: all the data fed into it stays on the user's computer, with processing done locally in the browser.
Teachable Machine can record from a computer's webcam and microphone and be trained to recognise images, sounds or poses. It can identify different people or objects and detect when they leave or return to the shot.
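The train-on-a-few-examples workflow described above can be sketched with a simple nearest-centroid classifier over feature vectors: average each class's recorded examples into a centroid, then label new inputs by cosine similarity. This is a minimal illustration only, not Google's actual implementation (Teachable Machine is understood to use transfer learning in the browser); the function names and vector shapes here are assumptions.

```python
import numpy as np

def train_centroids(samples_by_class):
    # Average each class's feature vectors into one centroid,
    # mimicking the "record a few examples per class" workflow.
    return {label: np.mean(vecs, axis=0)
            for label, vecs in samples_by_class.items()}

def classify(centroids, vec):
    # Pick the class whose centroid is most similar (cosine similarity).
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(centroids, key=lambda label: cos(centroids[label], vec))
```

In practice the vectors would be embeddings produced by a pretrained network rather than raw pixels, which is what lets a handful of webcam samples suffice.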
Other real-world use cases involve helping those with impaired speech use voice-powered computer products. Neurological conditions such as motor neurone disease can impede an individual's ability to interact with such software. Teachable Machine, however, can take audio, turn it into a spectrogram and be trained to recognise speech that isn't produced in a typical way.
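As a rough illustration of the spectrogram step, audio can be sliced into overlapping windowed frames and each frame passed through an FFT; the magnitudes form a time-frequency image a model can then learn from. This is a minimal NumPy sketch, not Teachable Machine's own pipeline; the frame and hop sizes are arbitrary assumptions.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    # Slice the signal into overlapping frames, apply a Hann window
    # to each, and take the magnitude of its FFT: one column of
    # frequency bins per time step.
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, time_steps)
```

A 1 kHz tone sampled at 8 kHz, for instance, concentrates its energy in a single frequency bin across every time step, which is the kind of stable pattern a classifier can latch onto.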
Elsewhere, educators at New York University's Interactive Telecommunications Program used the tool's pose recognition feature to create video games whose characters can be controlled with hand gestures.