Idea for a Watch Dogs and Predator-like tangible vision

14 May 2023

Like others, I enjoyed playing the first Watch Dogs game, especially its UI for remotely hacking things and its facial recognition system. So I started thinking about how something similar could be built using OSS projects that already do facial recognition, plus things like reverse image search, to make my own vision assistant thingy.

To emulate Watch Dogs’ person-info popup, I could use facial recognition, then reverse image search over mobile data, then scrape any social media profile where that photo shows up to grab their IRL name, handle, and, if they have LinkedIn, their job title. Then perhaps use H-1B filings or Glassdoor data to estimate their base salary/TC like in Watch Dogs. Asking ChatGPT to summarize their Twitter or other social media posts would be a way to generate a quirk line like “Posts about sushi.”
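
To make the flow concrete, here’s a rough Kotlin sketch of that pipeline as plain data classes and stub functions. Every name in it (`reverseImageSearch`, `scrapeProfile`, `estimateSalary`, `summarizeQuirk`) is a placeholder I made up; each step would need a real backend (a search API, a scraper, an H-1B/Glassdoor dataset, an LLM call) before it does anything.

```kotlin
// Hypothetical sketch of the profile-lookup pipeline described above.
// All types and function names are placeholders, not a real API.

data class FaceCrop(val jpegBytes: ByteArray)

data class Profile(
    val realName: String?,
    val handle: String?,
    val jobTitle: String?,      // e.g. pulled from a LinkedIn headline
    val estimatedSalary: Int?,  // e.g. from H-1B or Glassdoor data
    val quirk: String?          // e.g. "Posts about sushi"
)

// Each step is a stub so the overall flow reads clearly.
fun reverseImageSearch(face: FaceCrop): List<String> = emptyList() // candidate profile URLs
fun scrapeProfile(url: String): Profile? = null                    // name / handle / job title
fun estimateSalary(jobTitle: String?): Int? = null                 // salary lookup
fun summarizeQuirk(handle: String?): String? = null                // LLM summary of recent posts

fun lookupPerson(face: FaceCrop): Profile? {
    val candidates = reverseImageSearch(face)
    val base = candidates.firstNotNullOfOrNull { scrapeProfile(it) } ?: return null
    return base.copy(
        estimatedSalary = estimateSalary(base.jobTitle),
        quirk = summarizeQuirk(base.handle)
    )
}
```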

I watched The Predator (2018) and saw that the Predator’s UI can translate text in real time, which made me think of Yandex’s or Google’s translate services. For hardware, I thought about getting a Google Glass dev kit, or using a Cardboard-style holder for my phone and writing an app that uses the camera as input. I know Illinois has laws against collecting biometrics, but other states don’t have facial recognition laws AFAIK. Of course, I’m not even sure an app like this could handle all the faces on a Chicago or NYC sidewalk and still render in real time.
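
For the translation half, CameraX plus Google’s on-device ML Kit seems like the path of least resistance on a phone. Below is a rough sketch of an image analyzer that OCRs each frame and translates the result; I haven’t built this, so treat the exact ML Kit calls as “check against the current docs,” and the Spanish→English pair is just an example.

```kotlin
// Sketch: OCR each camera frame with ML Kit text recognition, then run the
// result through ML Kit's on-device translator and hand the string to a HUD.

import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

class TranslateAnalyzer(private val onResult: (String) -> Unit) : ImageAnalysis.Analyzer {

    private val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

    // Example pair: Spanish -> English; the model downloads once and runs on-device.
    private val translator = Translation.getClient(
        TranslatorOptions.Builder()
            .setSourceLanguage(TranslateLanguage.SPANISH)
            .setTargetLanguage(TranslateLanguage.ENGLISH)
            .build()
    ).also { it.downloadModelIfNeeded(DownloadConditions.Builder().build()) }

    @ExperimentalGetImage
    override fun analyze(frame: ImageProxy) {
        val mediaImage = frame.image
        if (mediaImage == null) { frame.close(); return }
        val input = InputImage.fromMediaImage(mediaImage, frame.imageInfo.rotationDegrees)

        recognizer.process(input)
            .addOnSuccessListener { text ->
                if (text.text.isNotBlank()) {
                    translator.translate(text.text)
                        .addOnSuccessListener { translated -> onResult(translated) }
                }
            }
            .addOnCompleteListener { frame.close() } // always release the frame
    }
}
```

Whether this keeps up with a busy sidewalk is exactly the open question; throttling to a few frames per second is probably where I’d start.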

I need to learn more about Kotlin/Android app development and see whether there’s a DirectX-like library I can use to draw HUD/UI elements, bounding boxes, etc. I also need to figure out how clear a picture of someone’s face has to be for a reverse image search to match reliably. Of course I’d like Predator-style thermal vision too, but proper thermal imaging costs too much.
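
On the HUD question: Android probably doesn’t need a DirectX-style library for this. The usual approach is a custom View layered over the camera preview that draws with the plain Canvas API. Here’s a minimal sketch under that assumption; the `Detection` type and `update()` method are my own invention, and the boxes would come from whatever detector fills them in.

```kotlin
// Hypothetical overlay view: draws labeled bounding boxes on top of a camera preview.

import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.RectF
import android.util.AttributeSet
import android.view.View

class HudOverlayView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null
) : View(context, attrs) {

    // One detection = a box in view coordinates plus a label to render above it.
    data class Detection(val box: RectF, val label: String)

    private val boxPaint = Paint().apply {
        style = Paint.Style.STROKE
        strokeWidth = 4f
        color = Color.CYAN
    }
    private val textPaint = Paint().apply {
        textSize = 36f
        color = Color.CYAN
    }

    private var detections: List<Detection> = emptyList()

    // Called from the analysis pipeline whenever new results arrive.
    fun update(newDetections: List<Detection>) {
        detections = newDetections
        postInvalidate() // schedule a redraw on the UI thread
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        for (d in detections) {
            canvas.drawRect(d.box, boxPaint)
            canvas.drawText(d.label, d.box.left, d.box.top - 10f, textPaint)
        }
    }
}
```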