Hello! This is the first in a series of blogs I intend to write about the development of the VR horror game I’m currently working on, Late Night Shop. I co-created it with Fred Fitzpatrick while I was learning to code last year. Since then we’ve convinced our employer, Total Monkery, to develop the title, which should be released later this year on PC, VR and consoles.
Stuff I plan on covering in the near future:
- Making bad guys move when you’re not looking at them
- Making scenery and other typically inanimate stuff move about when you’re not looking at it
- How I designed the AI
- How we’ve adapted the game for VR (Google cardboard and Oculus)
Back during summer last year I was busy teaching myself to code. I’d had several unsuccessful games industry interviews and was getting a little dejected about the whole thing. Total Monkery (the company I now work at) were letting me hang about as a sort of work experience/knowledge-leech at the time, during which I got talking to Fred about a game idea of his. He wanted to make a game with scary mannequins that move about when you aren’t looking, while you navigate through a clothing shop at night. I thought this was a pretty fun idea, so I busily began learning the tools I’d need to make it happen.
At the time I was pretty unsure of my coding skills, to say the least, and was a little hesitant to take on the task. In doing so I learnt to use Unity, an awesome, super easy-to-use game development engine scripted in C# (which is like C++ without all the bullshit). Once I’d learnt the ropes I was pleasantly surprised to be able to produce a prototype for this game in an afternoon, albeit very poorly optimised and constructed entirely of cubes. It didn’t take me much longer to get some basic AI going.
Pretty nifty, right? Fuelled by the success of my first task, I offered to code the entire game using these new tools. Over the next few months I fleshed out the game mechanics, refined the detection technique and added all sorts of bells and whistles.
Technique 1: The Shotgun Approach
The detection system started off working like this:
- Cast rays from the middle of the game camera, covering the entire visible screen
- If any of the rays hit anything, tell it not to move
- If an object cannot be seen, have it run towards the player in an utterly terrifying manner
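The steps above can be sketched in Unity roughly like this. It’s a minimal illustration, not the actual game code: `MannequinAI`, `Freeze()` and `Chase()` are hypothetical names standing in for whatever your enemy script exposes.

```csharp
using UnityEngine;
using System.Collections.Generic;

// Shotgun approach sketch: blanket the screen with a grid of rays every
// frame and freeze anything they hit. Hypothetical names throughout.
public class ShotgunDetector : MonoBehaviour
{
    public Camera playerCamera;
    public int raysX = 64;           // horizontal ray count
    public int raysY = 36;           // vertical ray count
    public float maxDistance = 50f;

    void Update()
    {
        var seenThisFrame = new HashSet<MannequinAI>();

        // Cover the visible screen with a grid of rays - thousands per
        // frame at any reasonable accuracy, hence the performance problem.
        for (int x = 0; x < raysX; x++)
        {
            for (int y = 0; y < raysY; y++)
            {
                Vector3 screenPoint = new Vector3(
                    (x + 0.5f) / raysX * Screen.width,
                    (y + 0.5f) / raysY * Screen.height);
                Ray ray = playerCamera.ScreenPointToRay(screenPoint);

                if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
                {
                    var mannequin = hit.collider.GetComponent<MannequinAI>();
                    if (mannequin != null) seenThisFrame.Add(mannequin);
                }
            }
        }

        // Anything a ray hit stays put; anything unseen comes for you.
        foreach (var m in FindObjectsOfType<MannequinAI>())
        {
            if (seenThisFrame.Contains(m)) m.Freeze();
            else m.Chase();
        }
    }
}
```

Even at this modest 64×36 grid that’s 2,304 raycasts per frame, which is where the trouble starts.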
As a proof of concept, this method seemed to work fine. Being a lazy programmer, I didn’t bother to optimise this approach for an embarrassingly long time.
Problems with this approach:
- To achieve reasonable accuracy, you need to cast literally thousands of rays to cover the entire screen. Doing anything thousands of times every frame is going to be performance-intensive! (Who knew?!)
- It’s only actually that accurate at close-to-mid range, because the rays diverge (see above image), sort of like firing a shotgun blast from your eyes.
- It casts a lot of redundant rays when you aren’t looking at anything important (like a wall, floor, ceiling, inanimate cylinder or hat rack)
Technique 2: Tracked Sniping
The failings of this method became blatantly clear after running the game on my laptop and getting a glacial frame rate of about 15 frames per second with basic character models and scenery. Pretty poor. My new methodology was to refine my approach, i.e. to use a well-aimed sniper rifle rather than a shotgun.
- Store a list of all the dudes you want to track
- Cast rays to each dude individually. Initially I just looked for the centre point of the object but have since extended it to look for a list of body parts.
- Only do step (2) if your baddie is in the field of view of the player camera
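Here’s a rough Unity sketch of those three steps. Again, this is illustrative rather than the shipped code; `MannequinAI`, `Freeze()` and `Chase()` are placeholder names, and the frustum test assumes each enemy has a collider.

```csharp
using UnityEngine;
using System.Collections.Generic;

// Sniping approach sketch: keep a list of tracked enemies and cast a
// single ray at each one, but only if it's inside the camera frustum.
public class SniperDetector : MonoBehaviour
{
    public Camera playerCamera;
    public List<MannequinAI> trackedEnemies;  // the dudes to track

    void Update()
    {
        Plane[] frustum = GeometryUtility.CalculateFrustumPlanes(playerCamera);

        foreach (var enemy in trackedEnemies)
        {
            // Skip anything outside the player's field of view entirely -
            // off-screen enemies are free to move by definition.
            Bounds bounds = enemy.GetComponent<Collider>().bounds;
            if (!GeometryUtility.TestPlanesAABB(frustum, bounds))
            {
                enemy.Chase();
                continue;
            }

            // One ray per enemy, aimed at its centre point.
            Vector3 origin = playerCamera.transform.position;
            Vector3 dir = enemy.transform.position - origin;

            bool visible = Physics.Raycast(origin, dir, out RaycastHit hit,
                               dir.magnitude + 0.1f)
                           && hit.collider.transform.IsChildOf(enemy.transform);

            if (visible) enemy.Freeze();
            else enemy.Chase();
        }
    }
}
```

The key saving is that the number of raycasts now scales with the number of visible enemies rather than with screen resolution.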
Here’s a vine of this system in action…
Sniping not only got me better performance (I was content with 100FPS on a laptop), but, because of the target tracking, it gave me much better results across a range of distances. This is still essentially the system I use now, with a number of minor tweaks and optimisations. For instance, I now use a list of dummy points (e.g. four points in the hands, six points in the head) to increase accuracy.
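The dummy-point refinement boils down to something like the helper below, assuming the points are empty GameObjects parented to the enemy’s bones. All names here are illustrative:

```csharp
using UnityEngine;

// Dummy-point visibility sketch: instead of one ray at the enemy's
// centre, raycast at several marker transforms placed on the body.
public static class DummyPointVisibility
{
    public static bool AnyPointVisible(Camera cam, Transform enemyRoot,
                                       Transform[] bodyPoints)
    {
        foreach (var point in bodyPoints)
        {
            Vector3 origin = cam.transform.position;
            Vector3 dir = point.position - origin;

            // The enemy counts as seen if even one dummy point has a
            // clear line of sight back to the camera.
            if (Physics.Raycast(origin, dir, out RaycastHit hit,
                    dir.magnitude + 0.1f)
                && hit.collider.transform.IsChildOf(enemyRoot))
            {
                return true;
            }
        }
        return false;  // no point visible - safe for the mannequin to move
    }
}
```

More points means better accuracy but more raycasts, which is exactly the trade-off described in the drawbacks below.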
It does have some drawbacks:
- Accuracy is still limited by the number of body parts you’re actively looking for
- Doesn’t work super well at very close range unless you have tonnes of dummy points (reduces performance)
- You need to be able to see a decent chunk of the target for it to register.
Technique 3: Ask someone who knows what they’re doing
I’m fortunately now in a position where I can get my boss (who has a lot more experience) to figure out a better solution. Ideally we want a pixel-perfect solution for this, which I’ve been assured is doable using some kind of graphics card witchcraft that I don’t really understand. Ultimately, the aim is that seeing literally one pixel of one of these bad guys will register as detection, giving us much better resolution at close range compared to the current system.
We haven’t implemented this yet, but we should get it up and running over the next month or so. If people are interested, I’ll do another blog on it later in development.
Thanks for reading! Next time I’ll go into how I’ve used the same principles to make scenery move when you can’t see it. We’ve been having a lot of fun with this mechanic and I can’t wait to show it in action. Check out the hashtag #LateNightShopGame for daily updates on the progress of the game.