[Screenshot: the Virtual House demo]

In the last two weeks I’ve put a lot of time into getting the core of the platform together. I still have a very long way to go, but as it stands right now, the following modules are working (a rough sketch of how they fit together follows the list):

  1. Speech Recognition
  2. Speech Synthesis (Text-To-Speech)
  3. Intent Recognition
  4. Natural Language Processing
  5. Plugin Infrastructure
  6. Data Storage API
  7. Multi-user support
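
To give a sense of how these pieces are meant to work together, here is a rough, hypothetical sketch of the flow for a single spoken request. None of the names below come from the actual platform code; they are stand-ins purely for illustration.

```typescript
// Hypothetical end-to-end flow for one spoken request. All names here are
// made up for illustration and are not the platform's real API.

interface Intent {
  action: string;                        // e.g. "lights.on"
  parameters: Record<string, string>;    // e.g. { room: "kitchen" }
}

interface Plugin {
  canHandle(intent: Intent): boolean;
  // Handles the intent for a specific user (multi-user support) and
  // returns a textual reply.
  handle(intent: Intent, userId: string): Promise<string>;
}

// Stand-ins for the platform's own modules, declared so the sketch type-checks.
declare function speechToText(audio: ArrayBuffer): Promise<string>;   // 1. Speech Recognition
declare function recognizeIntent(text: string): Promise<Intent>;      // 3 & 4. Intent Recognition + NLP
declare function textToSpeech(reply: string): Promise<ArrayBuffer>;   // 2. Speech Synthesis

async function processUtterance(
  audio: ArrayBuffer,
  userId: string,
  plugins: Plugin[],
): Promise<ArrayBuffer> {
  const text = await speechToText(audio);
  const intent = await recognizeIntent(text);
  // 5. Plugin Infrastructure: find a plugin willing to handle this intent.
  const plugin = plugins.find((p) => p.canHandle(intent));
  const reply = plugin
    ? await plugin.handle(intent, userId)  // per-user state kept via 6. Data Storage API
    : "Sorry, I can't do that yet.";
  return textToSpeech(reply);
}
```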

Next I needed to test it out in the real world, mostly to check stability and see whether it could handle arbitrary input. What better way than to throw it on the web as a live demo?

The Virtual House Demo

I decided to let people control a ‘virtual house’ with their voice (or text) using Chrome’s built-in Speech Recognition API. Since I wasn’t testing speech-to-text itself in this demo, Chrome’s recognizer was good enough. To get it all working I started the platform on an Ubuntu server, built a plugin to track the state of each user’s house, and connected the browser to the platform using Node.js and Socket.io. It works nicely, especially when you speak clearly.
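
The post doesn’t include the browser code, but as a hedged sketch, wiring Chrome’s speech recognition to a Socket.IO connection looks roughly like this. The ‘command’ and ‘houseState’ event names are assumptions I’ve made for illustration, not the demo’s actual protocol.

```typescript
// Browser-side sketch: capture speech with Chrome's Web Speech API and
// forward the transcript to the demo server over Socket.IO.
// Event names ("command", "houseState") are assumptions for illustration.
import { io } from "socket.io-client";

const socket = io(); // connects to the page's origin (the Node.js demo server)

// Chrome exposes the Web Speech API under the webkit prefix.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognizer = new SpeechRecognitionImpl();
recognizer.lang = "en-US";
recognizer.continuous = true;
recognizer.interimResults = false;

recognizer.onresult = (event: any) => {
  const result = event.results[event.results.length - 1];
  const transcript: string = result[0].transcript.trim();
  // Send the recognized phrase (e.g. "turn on the kitchen lights") to the
  // server, which relays it to the platform for intent recognition.
  socket.emit("command", { text: transcript });
};

// The server pushes the updated house state back once the platform's
// house plugin has processed the command.
socket.on("houseState", (state: unknown) => {
  console.log("Updated house state:", state);
});

recognizer.start();
```

On the server side, the Node.js/Socket.io layer presumably just relays each command to the platform and sends the resulting house state back to that user’s browser; the per-user state itself lives in the house plugin on the platform.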

There is still a lot to come; this demo simply shows where I’m heading with the project. Check out the demo video below of me using it, then head on over to projectlimitless.io and give it a try yourself!

Back in 2013 I played around with the idea of putting together my own J.A.R.V.I.S, but at that time I was also involved with my own start-up, Cirqls, and I never really got very far…

Fast-forward to 2016 and I have some spare time again (mainly between 10pm and 1am) as well as some daily tasks that I want automated. So I decided to revisit my old ideas and see what new technologies could support such a system. Turns out my old ideas were exactly that, old. After a couple of days thinking about the architecture I finally started working on it.

The main goal of what I call ‘Project Limitless’ is to build a platform for naturally controlling all the technology around you.

For now, enjoy the introduction video, the rest will follow soon…