Facial Recognition Screenshot

Facial recognition is one of those things that can be extremely cool, but also really creepy (privacy issues aside).

For Project Limitless, my aim in introducing facial recognition and tracking is to enable the system to ‘see’ you: where you are, what you are doing, and so on. It isn’t really meant to be taken out into the world, although it could be.

For Mark 0.6 I plan to have streaming audio and video working from any device to the core system. This will lay the groundwork for having everything in your home centered around a single ‘brain’ that can listen and talk to you wherever you are. For this demo video, however, I am just streaming from an Android device to test the idea.

The Demo

In the past two weeks I learned Android development with a few simple goals in mind:

  1. Have the app work in VR (Google Cardboard)
  2. Show the camera feed on the display (augmented reality)
  3. Overlay tracked faces on top of that
  4. When a face is recognized, show the name under it

Easy enough, right? Well, kind of… Since this was my first Android app, it was a jump into the deep end, but once you get used to it, it’s pretty fun. After the two weeks I have a mashup app for you to see! The demo works on photos, but it works on live people as well, provided enough training photos (three or more) are supplied.
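The recognition step itself boils down to comparing a detected face against labeled training samples. As a rough illustration of the idea (not the actual Android code — the class and names here are hypothetical), here is a minimal nearest-neighbor sketch in Python, where faces are stood in for by feature vectors and the three-photo minimum is enforced:

```python
import math
from collections import defaultdict

MIN_TRAINING_PHOTOS = 3  # recognition only kicks in with 3+ samples per person

class FaceRecognizer:
    """Toy nearest-neighbor recognizer.

    A real system would compare face embeddings from a trained model;
    here a face is just a small feature vector (tuple of floats).
    """

    def __init__(self):
        self.samples = defaultdict(list)  # name -> list of feature vectors

    def train(self, name, features):
        """Add one labeled training sample (one 'photo') for a person."""
        self.samples[name].append(tuple(features))

    def recognize(self, features, threshold=1.0):
        """Return the closest known name, or None if nothing is close enough."""
        best_name, best_dist = None, float("inf")
        for name, vectors in self.samples.items():
            if len(vectors) < MIN_TRAINING_PHOTOS:
                continue  # not enough training photos yet
            for vector in vectors:
                dist = math.dist(features, vector)
                if dist < best_dist:
                    best_name, best_dist = name, dist
        return best_name if best_dist <= threshold else None
```

With three samples of a person trained, a nearby query vector returns their name; a person with fewer samples, or a face far from everyone, returns None.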

Notice that the display is cloned on the right; this allows it to work in a VR headset.

Until next time!

Virtual House Screenshot

In the last two weeks I’ve put in a lot of time to get the core of the platform together. I still have a very long way to go, but as it stands right now, the following modules are working:

  1. Speech Recognition
  2. Speech Synthesis (Text-To-Speech)
  3. Intent Recognition
  4. Natural Language Processing
  5. Plugin Infrastructure
  6. Data Storage API
  7. Multi-user support
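To give a feel for what one of these modules does, here is a hypothetical, stripped-down intent recognizer in Python. It is only keyword matching with slot extraction — far simpler than what the platform’s actual NLP pipeline would use — and all names in it are illustrative:

```python
import re

# Hypothetical intent patterns. A real intent recognizer would use NLP
# models, not hand-written regular expressions.
INTENTS = {
    "lights.on":  re.compile(r"turn on the (?P<room>\w+) light"),
    "lights.off": re.compile(r"turn off the (?P<room>\w+) light"),
    "time.query": re.compile(r"what time is it"),
}

def recognize_intent(utterance):
    """Return (intent_name, slots) for the first matching pattern."""
    text = utterance.lower().strip()
    for name, pattern in INTENTS.items():
        match = pattern.search(text)
        if match:
            return name, match.groupdict()
    return None, {}
```

For example, “Turn on the kitchen light” would resolve to the `lights.on` intent with `room` set to `kitchen`, which a plugin can then act on.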

Next I needed to test it in the real world, mostly to check stability and whether it can handle arbitrary input. What better way than to throw it on the web in a live demo?

The Virtual House Demo

I decided to have people control a ‘virtual house’ with their voices (or text) using Chrome’s built-in Speech Recognition API. I wasn’t testing speech-to-text, so this worked fine. To get it all working I started the platform on an Ubuntu server, built a plugin to handle the state of every user’s house and connected the browser and the platform using Node.js and Socket.io. It works nicely, especially when you speak clearly.
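The core of that plugin is just per-user state: each connected visitor gets their own house whose devices are toggled by recognized commands. The real demo runs through Node.js and Socket.io, but the idea can be sketched in a few lines of Python (class and method names here are made up for illustration):

```python
class VirtualHouse:
    """Holds the device states for one user's house."""

    def __init__(self):
        self.devices = {}  # e.g. "kitchen light" -> "on"/"off"

    def handle(self, command):
        """Apply a simple 'turn on/off the <device>' command."""
        words = command.lower().split()
        if words[:2] in (["turn", "on"], ["turn", "off"]):
            state = words[1]
            device_words = words[2:]
            if device_words and device_words[0] == "the":
                device_words = device_words[1:]  # drop the leading article
            device = " ".join(device_words)
            self.devices[device] = state
            return f"Turned {state} the {device}."
        return "Sorry, I didn't understand that."

class HousePlugin:
    """One VirtualHouse per user, mirroring the platform's multi-user support."""

    def __init__(self):
        self.houses = {}

    def on_command(self, user_id, command):
        house = self.houses.setdefault(user_id, VirtualHouse())
        return house.handle(command)
```

Each user’s commands only ever touch their own house, so two visitors talking to the demo at the same time never interfere with each other.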

There is still a lot to come; this demo simply shows where I’m heading with this project. Check out the demo video below of me using it, then head on over to projectlimitless.io and give it a try yourself!

Back in 2013 I played around with the idea of putting together my own J.A.R.V.I.S., but at the time I was also involved with my own start-up, Cirqls, and I never got very far…

Fast-forward to 2016 and I have some spare time again (mainly between 10pm and 1am), as well as some daily tasks that I want automated. So I decided to revisit my old ideas and see what new technologies could support such a system. It turns out my old ideas were exactly that: old. After a couple of days thinking about the architecture, I finally started working on it.

The main goal of what I call ‘Project Limitless’ is to build a platform for naturally controlling all the technology around you.

For now, enjoy the introduction video, the rest will follow soon…