Author Topic: The Website is Down Adventure Game  (Read 1940 times)
brickshot
« on: February 23, 2020, 11:52:42 AM »

The Website is Down Adventure Game


What's this?
A comedy adventure game where you solve problems using retro technology and deal with the timeless shenanigans of human to computer to human interaction!

Background
My name is Josh Weinberg and a long time ago I made a web-series called The Website is Down (see Sales Guy vs Web Dude). This project started out as an “interactive episode” of the web-series and over time morphed into an adventure game. At the moment there is no “game” in this game, however; it’s mostly just a big proof-of-concept!


Photographic realism!

The Prelude
Currently I am working on completing the prelude - the first short segment of the game which will also function as a playable demo. In it you find yourself locked in your parents’ basement in their sad attempt at some kind of intervention. “We’ll let you out when you get off drugs and get a job!” they yell through the door. “I’m not on drugs!” you yell back. “Well just find a job then!” Your mother is nice enough to slip a fruit-roll-up under the door. So caring! But how are you supposed to find a job locked in a basement? Too bad you don’t have a cell phone. It’s 1983, remember! You find your dad’s IBM XT computer and manage to put it together and boot it. You also discover a modem and your old list of BBS numbers. SKRREEEEEEEEEEEEEEEEEEEEEE AIYEEEOHHHNNCHEEEE. Now for the job part!

Bubble Cam
You can see that while the interface is 3D first-person, the environment is built with panoramic photography. I call this the “Bubble Cam” and it basically consists of showing the player a picture of my parents' actual basement from anywhere they can go. I will make a separate devblog post on how this is achieved.


Collisions with the background. Like when you walk into a wall drunk.

Ye Olde IBM 5160
Since I wanted the real feel of the '80s, of course we need a big ol' honkin' desktop computer to work with. Here you see the IBM PC 5160 "XT" which was the Cadillac model of the day. I actually found one of these (working!) and used it as a reference to make the models and the text mode simulation as accurate as possible.


Let's put this bad boy together.

It's called "Online"
Well now that we've got our computer assembled let's take this bad boy out on the freeway. I made up some terminal software and called it BroComm (I don't know why), but this is your ticket to the world of dial-up networking. Have fun and watch out for those long-distance charges!


Luckily we have a hacked version!

Current Status or “What comes before Alpha?”
The game currently is in a very early tech-demo state. Some things you can see in the videos above:

  • The "Bubble Cam" panoramic 3D visuals.
  • Object occlusion and shadows on panoramic background.
  • Basic game stuff - pickup, drop, inspect.
  • Compose parts to build computer.
  • IBM XT text mode simulator.
  • Turn on computer - boot to BROCOMM.
  • See the BALZE hack page!

Some other things which are working but you can’t see are:

  • VT100 terminal emulation
  • Telnet and SSH to internet BBSes via in-game computer
  • QEmu integration - run virtual PC via in-game computer

And this will be done when?
Probably never! I've been working on this thing in my spare time for a few years now. Will it ever be done? No idea. As I mentioned, it's all a major work in progress and there is no real “game” part yet but I wanted to get the current state up on the blog so I can share what there is to share. I will post more about some of the above technical stuff later. Thanks for reading!

-josh
« Last Edit: February 23, 2020, 12:24:59 PM by brickshot »
brickshot
« Reply #1 on: March 10, 2020, 05:08:18 AM »

The Basement

The Website is Down Adventure Game starts off with you locked in your parents' basement. For the sake of accuracy and for my own personal entertainment I decided to use my parents’ basement as the model. However, shockingly, "Weinberg family basement" turns up nothing on Turbo Squid! Neither does "fake stained-glass plastic candelabra" or "busted olive green fondue set circa 1975". Modelling out this stuff by hand would take a decade if I were good at modelling, and I suck at modelling! So, I thought, "what if I just did the most obvious thing possible and showed the player a picture of the real basement from wherever they go?"


My parents’ basement.

In a nutshell

The idea was to capture a 360° panoramic image from every point in the basement and then display that image when the player is in the corresponding spot in the virtual world. Basically it’s like what would happen if Google Street View took a picture every quarter of an inch instead of every fifteen feet. As the player moves around they see a constantly updated view of what the room actually looks like from wherever they are and in whatever direction they are looking. Note that I’m not extracting texture data and mapping it onto 3D objects. The visuals are literally photographs of the actual location at a high enough density to allow smooth navigation.

Practically speaking there are of course far too many locations in the room to literally take a picture from each one. So I first restricted the player's movement to a set of line segments along a carefully chosen path. This movement restriction is justified by filling the extra space with static "stuff": boxes and general basement junk which you would not be able to walk through anyway. I then used a 2 inch interval for camera positions along each of the line segments. This gave enough images to allow for a decent frame interpolation (increasing the effective resolution to ¼"; see below) without being too time consuming to record. Principal photography of the 187 camera positions took about ten hours in total.
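To make the spacing concrete, here is a small illustrative sketch (Python, not the project's actual tooling) of generating evenly spaced capture positions along a path of line segments:

```python
# Illustrative sketch: evenly spaced capture points along path segments,
# like the 2-inch marks used during photography. Names are my own.
import math

def capture_positions(segments, spacing=2.0):
    """Return (x, y) points every `spacing` inches along each segment."""
    points = []
    for (x0, y0), (x1, y1) in segments:
        length = math.hypot(x1 - x0, y1 - y0)
        steps = int(length // spacing)
        for i in range(steps + 1):
            t = (i * spacing) / length
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return points

# A single 12-inch segment at 2-inch spacing gives 7 camera positions.
print(len(capture_positions([((0.0, 0.0), (12.0, 0.0))])))  # 7
```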

I ran the resulting image sequence through a post-processing workflow to extract the 3D camera path, stabilize and interpolate the images and generate placeholder objects for collisions and shadows.

Here is a brief video showing the various parts of the workflow which I describe in detail below.




Details!

Capturing the Panoramas

Each panoramic photo in this project is composed of 4 individual full-frame spherical images taken at 90° angles to each other and then stitched together into an equirectangular projection. You can see the spherical images at the top and the corresponding equirectangular projection on the bottom of the following image. To record the images I used a Nikon D5500 DSLR with a full-frame fisheye lens and a Nodal Ninja panoramic tripod head. The "pano-head" allows for accurate rotations to be made around the lens' nodal point which is important to avoid parallax distortion in the output.
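The equirectangular projection itself is simple: the longitude and latitude of a view direction map linearly to image x and y. Here is a hedged sketch in my own notation (not PTGui's internals):

```python
# Illustrative mapping from a unit view direction to equirectangular
# pixel coordinates. Axis conventions here are an assumption.
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a unit view direction to (u, v) pixel coordinates."""
    lon = math.atan2(x, z)             # -pi..pi around the vertical axis
    lat = math.asin(y)                 # -pi/2..pi/2 up/down
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

print(dir_to_equirect(0.0, 0.0, 1.0, 4096, 2048))  # straight ahead -> image center
```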


Fisheye to Equirectangular

I considered using one of the all-in-one panoramic cameras on the market but decided against it for a number of reasons. Compared to a similarly priced DSLR they have lower resolution and exposure range. The affordable ones also lack some critical features like aperture lock and RAW mode recording. Plus a full 360° field of view means that there is nowhere for the operator to hide! I would have had to leave the room for every exposure or take multiple shots anyway which just negates the point of having an all-in-one solution. With the pano-head I was able to generally just move around behind the camera for each shot which was not a bad workflow.

Since I was essentially shooting a stop-motion animation I had to deal with the changes in lighting caused by the movement of the sun. I addressed this by spreading the shooting out over a number of days, shooting for a few hours at the same time each day. I also used a window box covered with diffusion gels outside the window to even out the light and avoid direct sunlight shafts.

In order to reduce the footprint of the tripod and minimize the amount of post-processing needed to remove it I used a custom built wooden spreader. This brought the legs of the tripod into a narrow 12 inch radius. It also made it possible to simply push the tripod along the floor without picking it up and resetting it every time which would have been time consuming and error-prone.

Here look at some stuff.


Pano Head with Nikon

Ultraviolet Positioning Marks

Find a handy friend to make you a triangle

Tripod on custom spreader

The most difficult part of the initial photography was getting an accurate position and orientation of the camera for each panorama. For the effect to work the images need to be taken from very evenly spaced positions along the paths and the camera orientation needs to stay consistent. The placement issue was solved by drawing out a path on the floor using ultraviolet ink which is invisible on camera but which can be illuminated with a blacklight for positioning. I marked off two inch intervals along the paths and was able to position the tripod very precisely using those marks.

The camera orientation was a more difficult problem. I attempted to use both a compass and a laser pointer attached to the body of the camera to keep it aligned to a consistent direction. This gave a rough orientation at best since even a small rotation of the tripod was very obvious when the images were viewed in succession. I eventually solved this problem very effectively using the stabilization feature of my match-moving software which I will discuss below.


Principal Photography. (full rez)

Once the images were captured I stitched them together using PTGui Pro to generate 187 4K (4096x2048) PNG images. These images were brought into Premiere as an image sequence for a rough alignment to prep them for the following steps.


Raw Image Sequence. (full rez video)
Interpolation
With a 2" separation between images there is a detectable stutter to the movement when played back. It feels like an old-timey movie. To remedy this I processed the sequence with an optical flow plugin for After Effects called Twixtor. Designed to generate slow-motion from full frame rate video it did an excellent job of creating new frames in between my original images.
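As a rough illustration of what interpolation buys you (Twixtor uses optical flow, which is far better than the naive crossfade sketched here in Python):

```python
# Naive crossfade interpolation, purely for illustration. Frames are
# lists of pixel intensities here for simplicity.
def interpolate(frames, n=3):
    """Insert n blended frames between each original pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        for i in range(1, n + 1):
            t = i / (n + 1)
            out.append([(1 - t) * pa + t * pb for pa, pb in zip(a, b)])
    out.append(frames[-1])
    return out

print(interpolate([[0.0], [4.0]]))  # [[0.0], [1.0], [2.0], [3.0], [4.0]]
```

With n=3 every 2-inch gap effectively becomes four half-inch steps, which is the spirit of the density increase described above.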

Color Grading
The next post-production step was to take the interpolated images and apply a color grade in DaVinci Resolve. This allowed me to give the scene a more dramatic film-like look. The basic workflow is to find a white point and black point in the scene, do a color balance and then apply color modifications. I also was able to smooth out some of the erratic lighting caused by the stop-motion nature of the image capture.

Match Moving
Match moving software is magic. Using only a sequence of images as input it can calculate the actual 3D position of objects in the scene as well as the movement and orientation of the camera which filmed it. Modern match moving software works using analysis of general features in the images themselves and does not require any special trackers or locators. I used SynthEyes which has support for equirectangular (panoramic) images built in.

Stabilization
First the images were imported into SynthEyes as a sequence and automatically stabilized. This stabilization step by itself was worth the price of admission. The resulting images were far better aligned and more stable than using any of my manual measurement approaches. After stabilization I used a keyframed rotoscope mask to designate areas of the scene which should not be used for feature tracking. These areas were mainly the shadow of the camera and some lens flares which moved across the view.


Interpolated, color graded and stabilized. (full rez)

Tracking
Tracking was run using SynthEyes to automatically find hundreds of trackable image features and to "solve" for their 3D locations. This also computed the camera movement as a path with an orientation at each point. Lastly a coordinate system was defined using three positions picked on the floor of the basement. This gives SynthEyes the information it needs to correctly orient the 3D scene and makes working with the data downstream more convenient.


Match Moving: Tracking features with SynthEyes. (full rez)

Compare the hand-plotted diagram I used to lay out the camera path on the floor of the basement with the 3D camera path SynthEyes found by analyzing the images:


Camera path as plotted by hand


Camera Path found by feature tracking in SynthEyes

At this point the camera path was exported into a text file which I processed with OpenSCAD to generate the movement barrier object mentioned above. The barrier was exported as an FBX object for use in the Unity scene later.
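The post doesn't show the actual OpenSCAD processing, so here is one hypothetical way to do it (Python emitting OpenSCAD source): build the walkable corridor as hulled cylinder pairs along the path, which could then be subtracted from a room-sized block to leave barrier walls.

```python
# Hypothetical stand-in for the OpenSCAD step; all names and dimensions
# here are my own assumptions, not the project's script.
def barrier_scad(path_points, radius=8.0, height=70.0):
    """Return OpenSCAD source for the corridor volume along the path."""
    parts = []
    for (x0, y0), (x1, y1) in zip(path_points, path_points[1:]):
        parts.append(
            "hull() { translate([%g, %g, 0]) cylinder(h=%g, r=%g); "
            "translate([%g, %g, 0]) cylinder(h=%g, r=%g); }"
            % (x0, y0, height, radius, x1, y1, height, radius)
        )
    return "\n".join(parts)

print(barrier_scad([(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]).count("hull()"))  # 2
```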

Making it Playable
I used Unity as my game engine. To assemble the prototype I first brought in the 3D barrier object and gave it a mesh collider to keep the player locked into the correct view path. An FPS character was created with a very narrow capsule collider which fit just into this path.

For the panoramic background I created a large sphere around the camera and assigned a material using a non-culling shader. The core of the system is a script which determines the correct image to display on the sphere for the current player position. This script takes the camera path from SynthEyes and based on the known number of locations in the total path it calculates the index of the image to show for the current camera position. This image is then copied to a render texture which is applied to the sphere material.
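A minimal sketch of that position-to-image lookup (illustrative Python; the real project does this in a Unity script, and the names here are mine):

```python
# Map the player's position to the index of the panorama captured
# nearest that spot along the camera path. Illustrative only.
def image_index(player_pos, path_points, image_count):
    """Pick the panorama whose capture point is closest to the player."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(range(len(path_points)),
                  key=lambda i: d2(player_pos, path_points[i]))
    # Scale in case the interpolated image set is denser than the path samples.
    return round(nearest * (image_count - 1) / (len(path_points) - 1))

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
print(image_index((1.2, 0.0), path, 4))  # 1
```

The chosen image would then be copied to a render texture on the background sphere, as described above.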


In Unity(full rez)

So now the player can walk around a virtual replication of my parents' basement. Exhilarating!
 
Next Steps
Next Steps! Get it? Oh man. So yeah unfortunately the thrill of just walking around the room with nothing else to do wears off pretty quickly (like my hilarious blog commentary I’m sure). In order to make this into a game it's necessary to have some interactive elements. In my next post I’ll get into how I made 3D objects interact with the fake background and how I got shadows on it. Also I can tell you a bedtime story about WEBM to DXT extraction! I know you can’t wait but you have to.

-josh
« Last Edit: March 10, 2020, 05:25:23 AM by brickshot »
Ishi
« Reply #2 on: March 10, 2020, 01:08:39 PM »

That whole photography process is astounding work. I kept grinning at the ingenuity of it all whilst reading Grin

Quote from: brickshot
Interpolation
With a 2" separation between images there is a detectable stutter to the movement when played back. It feels like an old-timey movie. To remedy this I processed the sequence with an optical flow plugin for After Effects called Twixtor. Designed to generate slow-motion from full frame rate video it did an excellent job of creating new frames in between my original images.

For the panoramic background I created a large sphere around the camera and assigned a material using a non-culling shader. The core of the system is a script which determines the correct image to display on the sphere for the current player position. This script takes the camera path from SynthEyes and based on the known number of locations in the total path it calculates the index of the image to show for the current camera position. This image is then copied to a render texture which is applied to the sphere material.

I'm curious, after the interpolation how many images do you then have? Are these images dynamically streamed into memory in-game (maybe keeping the images nearby the current position loaded)? It seems like there would be far too many images to keep them all loaded at all times.

Looking forward to seeing more about this game!

Thaumaturge
« Reply #3 on: March 11, 2020, 09:30:39 AM »

That photographic technique is really neat, and rather interesting--and the results look good indeed! Well done on that! :D
RealScaniX
« Reply #4 on: March 11, 2020, 09:43:36 AM »

That's indeed an awesome idea and looks great. I hope there will be a demo soon to check it out.
Those weird, but brilliant solutions are one of the best parts of gamedev for me. Smiley

oldblood
« Reply #5 on: March 11, 2020, 10:31:34 AM »

Very cool concept.

brickshot
« Reply #6 on: March 11, 2020, 12:41:01 PM »

Quote from: Ishi
That whole photography process is astounding work. I kept grinning at the ingenuity of it all whilst reading Grin

Quote from: Thaumaturge
That photographic technique is really neat, and rather interesting--and the results look good indeed! Well done on that! :D

Quote from: RealScaniX
That's indeed an awesome idea and looks great.

Quote from: oldblood
Very cool concept.


Thank you all so much! It's really awesome to hear.


Quote from: Ishi
I'm curious, after the interpolation how many images do you then have? Are these images dynamically streamed into memory in-game (maybe keeping the images nearby the current position loaded)? It seems like there would be far too many images to keep them all loaded at all times.

So the short answer is that I am currently keeping them all in memory at the same time. There are 756 images which make up this little scene and I have rendered two separate exports of each of them - lores and hires. The lores is 2048x1024 and the hires 4096x2048. The lores images are 1MB each and the hires 4MB. So at runtime that is about 756MB for lo and 3GB for high. I will let you pick which set you want to run with based on memory constraints. The hires of course looks better.
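A quick back-of-envelope check of those numbers:

```python
# Resident memory for the panorama sets, from the per-image sizes above.
images = 756
lores_mb, hires_mb = 1, 4               # quoted per-image sizes in MB
lores_total = images * lores_mb         # 756 MB for the lo-res set
hires_total = images * hires_mb         # 3024 MB, i.e. about 3 GB
print(lores_total, hires_total)
```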

Hires - 4096x2048


Lores - 2048x1024


There are two main issues with the size of these images - first it limits the size of the scene I can have to how much can fit in memory, and second it will make distribution a pain because if I have 30 scenes like this that's like 90GB of download, which is really getting to be too much.

So to deal with the first problem I might have to move to a solution like you mentioned, Ishi, where I stream in from disk based on some criteria. However I'm pretty sure the player can move faster than I can load images from disk so there could be a lag.

I initially tried encoding these all into a HAP video which is a GPU-decodable video format. The idea was that I could just have the HAP video file in memory and decode the required frame on the fly. That really seemed like it was going to be a good solution because you get the benefit of compressed image data with high-speed decompression, but I never was able to get it to work without some kind of stutter. Part of the issue is that I need basically random access into the image stream because you may walk forward or backward, and also because at the junction point in the room it has to jump from one point in the image sequence to another based on where you go. HAP (at least when I was messing with it) seemed really optimized for forward playback and not so much for this random access which I needed. So I just fell back on loading it all in memory and dealing with the consequences of that.

The second problem of download size I have dealt with by encoding the images into a WEBM video for distribution, downloading that when the scene is loaded, and decompressing it directly into the individual DXT textures on disk. This makes the download size considerably smaller - 92MB for the lores and 224MB for the hires. Take a look if you like:

Lores webm
Hires webm

Thanks again for the comments and questions! Have a great day!

-josh
Quicksand-S
« Reply #7 on: March 12, 2020, 10:21:20 AM »

This is one of the coolest things I've seen in ages. Thanks for describing your process in detail. Great stuff.
brickshot
« Reply #8 on: June 06, 2020, 12:34:08 PM »

Interactivity

Hello there! Here is a small post about how I am handling collisions and occlusion for this scene which uses panoramic photography instead of geometry to render the environment.


As I mentioned in a previous post the challenge is that while the panoramic "bubble-cam" works well for walking around, it's hard to have much of a game if you can't interact with anything in the scene. Traditionally of course you would just put some 3D objects into the world, assign them a rigidbody and a collider and the game engine would take care of the rest. In this case however the geometry of the room does not exist. There is just a large sphere surrounding the camera projecting the image you would see from your position in the room. So there is nothing to collide with and no way to place an object 'behind' anything either. Here's what it looks like if we throw an object into this scene.


Back it up
Let's back up and try to remember what we're doing. The game engine traditionally renders an environment using 3D meshes which describe the objects in the scene. The GPU draws triangles and takes care of 'not showing objects when they go behind other objects' (aka The Visibility Problem (aka Occultation which is a much cooler name although nobody ever calls it that)) by using a depth buffer. As a nice side effect of having the geometry for the rendering you also have it (or a simplified version of it) to use as colliders for the physics engine. In our case however we don't have any geometry and that means we can't use a depth buffer to handle visibility and we can't use colliders to handle collisions.

OK so I did need to learn a little 3D modelling
The solution in this case was to make stand-in objects which act as colliders and give the rendering engine something to use for the depth buffer. Creating these objects was done with the help of the match-moving software SynthEyes which I’ve talked about a bit previously. I was able to export trackers from SynthEyes into Blender where I used them to make a pretty good stand-in model. Starting from the match-moving trackers was key. There’s no way to just eyeball these and have them come out right. In some places the objects still don’t match up perfectly but for the most part it works well and is believable.

Here are the stand-in objects created in Blender from the SynthEyes trackers:


Collisions
So now that I have some geometry let’s put it into the scene. Here is how it looks with colliders assigned to all the stand-in objects:

The collisions look good but as soon as the monitor rolls off the top of the box the illusion is lost because the monitor can’t go behind the box.

Occlusion... Occultation... Visibility
We want the background to show through the stand-in objects because we’re using a panoramic photo to show the environment but we can't just make them transparent because then they show interactive objects which are supposed to be blocked by them. So we need some weird hybrid shader which blocks the visibility of other objects in the scene but also transparently shows the background sphere which is behind everything when our object is in the foreground. I'm getting confused just writing this. Luckily there exists something like this already - it’s called a Shadow Catcher and is used for Augmented Reality applications. I am using ShadowDrawer by Keijiro Takahashi: https://github.com/keijiro/ShadowDrawer.

Words are hard
Maybe pictures will make this clear. First here is the scene with just the background. This is the panoramic image on a sphere - there is no other geometry.



Here I have inserted the stand-in collision geometry using a standard opaque shader. Collisions look good but we can’t see the background image.



Now I am using the Shadow Catcher shader on the stand-in objects with a checkerboard background so you can see which parts of the view are transparent. Notice that shadows are partially transparent and when the monitor goes behind the shadow catcher it is occluded.



Finally, here is the resulting composite view using the actual background imagery. Since the stand-in objects are in just the right positions the shadows fall believably across the background image and the occlusion happens at just the right areas. Our eyes are fooled.


Freeze frame showing the shadow composite onto the background.


    A - Opaque shader on stand-in objects.
    B - Shadow Catcher shader with checkerboard background.
    C - Panoramic background image.
    D - Composite image with B overlaid on C.


Here they are side by side in motion:


Finally

Times are a little stressful at the moment. Personally I would like to remind you that social media is too often just making a buck off of your stress and anxiety. Take a break, restrict or drop it from your life. You will feel better. If you are looking for a way to support change in your community here's a way: https://itch.io/b/520/bundle-for-racial-justice-and-equality. 742 games. All proceeds are donated.  Thanks for reading.

-josh

Ishi
« Reply #9 on: June 06, 2020, 01:29:17 PM »

Great stuff, that works really well! It reminds me of Ocarina of Time which seems to use a similar effect for its pre-rendered backdrops, as seen in the Boundary Break episode.

https://youtu.be/HUgE9L7V4oY?t=105

brickshot
« Reply #10 on: June 07, 2020, 06:02:25 AM »

Thanks Ishi! I'll go check that out.
velocirection
« Reply #11 on: June 09, 2020, 12:55:15 AM »

Absolutely crazy!!! Amazing technique to make a realistic place. And that occlusion post... Wow. Looks really good!!
brickshot
« Reply #12 on: August 05, 2020, 09:29:41 AM »

The Information Dirt Road

Remember back in the before-times when you didn't even have an ISP because the 'I' didn't exist yet and instead you spent your after-hours monopolizing your family phoneline dialing up Bulletin Board Systems at 300 baud and loving it? It sounds so pedestrian and laughable now like the way my kids laugh when I try to explain paper money or stamps or "phoneline" for that matter. I have a great nostalgia for those days so I wanted to incorporate the world of BBSes into my retro adventure game. Enter BroComm.

BroComm


BroComm in action

If you do remember how this all worked you probably have fond recollections of your trusty modem dialer/terminal emulator. With it you could dial your modem and talk to the outside world via your phone line. You could transfer files, play games, read news. It was bananas! In this game I plan to have simulated BBSes which will be used to meet other characters and solve puzzles. However as a perk I also wanted to be able to connect the in-game computer to real live BBS systems (which people still run today over telnet). This blog post will describe how I implemented this.

From the top

Here is a diagram of what is happening when you are connected to a remote host via BroComm:


The flow of a character

From the top to the bottom here are some implementation details:

Telnet

Since these modern day BBSes all run over telnet we need to make an outbound telnet connection. Using the Telnet RFC (written in 1983 by the master RFC craftsman, Jon Postel) I cobbled together a very simple telnet client which is smart enough to negotiate some basic options during connect. After that point we have a bi-directional character stream which can be used to send and receive bytes to and from the remote server.
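A hedged sketch of the kind of minimal option negotiation described (Python, not the game's actual client): simply refuse every option the server proposes, which leaves a plain bidirectional byte stream.

```python
# Minimal telnet option handling per RFC 854: answer every DO with WONT
# and every WILL with DONT. Incomplete IAC sequences at the end of a
# chunk are passed through; a real client would buffer them.
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254

def negotiate(data):
    """Split incoming bytes into application data plus refusal replies."""
    out, reply, i = bytearray(), bytearray(), 0
    while i < len(data):
        if data[i] == IAC and i + 2 < len(data) and data[i + 1] in (DO, WILL):
            verb, opt = data[i + 1], data[i + 2]
            # "No thanks" to every proposed option.
            reply += bytes([IAC, WONT if verb == DO else DONT, opt])
            i += 3
        else:
            out.append(data[i])
            i += 1
    return bytes(out), bytes(reply)
```

For example, a server's `IAC DO ECHO` would come back as `IAC WONT ECHO`, with the surrounding text delivered untouched.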

VT100



The next requirement is to handle the VT100 control code protocol, which is a standard for embedding command sequences in a character stream. This allows the host to tell the client things about cursor positioning, color, screen management, character attributes and much more. Here is a great reference manual on the VT100 terminal which is where the protocol originated (over 40 years ago!). I didn't implement this myself but instead used a very nice open source VT100 library, VtNetCore. This library handles all the escape sequences and manages a virtual screen which is updated based on the input received. When anything changes I query it to determine what visible character is at each screen position.

There does exist another standard which is widely used on BBSes known as "extended ANSI graphics" (or something like that) which is an extension to the VT100 protocol (I think! Someone correct me here.) The VtNetCore library doesn't handle this but I may try to add support at some point.

Screen Memory Texture

Once we know what characters are in what position on the screen we now need to render it. We're simulating a text-only graphics device here (something like the IBM Monochrome Display Adapter). That device supports a display of 80x25 characters which are selected from a fixed character set that is permanently embedded in the ROM of the display hardware. So no fancy fonts for us! In order to convert our in-memory representation of the screen to something we can look at I created a custom shader which takes two input textures. The first holds the contents of the screen encoded as a very small (80x25 pixels) texture and the other holds an image of the font which is used to draw characters.

This is what the screen memory texture map might look like if you were able to see it:


Screen Memory Texture

It is an RGB24 texture which gives us 3 bytes per pixel, one for each of red, green and blue. A single byte of course can represent 256 values which, not coincidentally, is how many characters are in Code Page 437. So I can store an index into the code page in each position using a single channel of the image - in this case the red channel, which is why it looks so red. I also make use of the green channel to hold flags for character attributes. The lowest order bit is for "bold", the second is for "underline", the third is for "emphasized" (which draws the character brighter), etc. The blue channel is bored because it's not used for anything. Maybe some day, blue channel.
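As a sketch of that encoding (illustrative Python; the bit values follow the description above, and the names are mine):

```python
# Pack one screen cell into an RGB24 pixel as described:
# red = Code Page 437 index, green = attribute bit flags, blue unused.
BOLD, UNDERLINE, EMPHASIZED = 1, 2, 4   # bit values per the text

def pack_cell(cp437_index, bold=False, underline=False, emphasized=False):
    """Return the (R, G, B) byte triple for one character cell."""
    attrs = ((BOLD if bold else 0)
             | (UNDERLINE if underline else 0)
             | (EMPHASIZED if emphasized else 0))
    return (cp437_index, attrs, 0)      # poor bored blue channel

print(pack_cell(0x58, bold=True))       # (88, 1, 0): a bold 'X'
```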

Here is the texture which is used to store the glyphs. It tells the shader what each character in the code page looks like:


Code Page 437 Font

I laid out this image 16x16 which is helpful because the low order nibble (4 bits) can be used to select the column and the high order nibble the row. For example if the pixel in the screen memory texture at position [0,0] is 0x58 (in hex), the character can be found in this map at row 5, column 8 (0 indexed) i.e. 'X'. This just makes lookup easier for the shader.
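In code the lookup is just a nibble split (trivial Python sketch):

```python
def glyph_cell(code):
    """Split a CP437 code into (row, column) in the 16x16 font atlas."""
    return code >> 4, code & 0x0F       # high nibble row, low nibble column

print(glyph_cell(0x58))                 # (5, 8) -> 'X' in the atlas
```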

The shader

Did someone say shader? If you aren't familiar with what a shader is or how it works it's a little program which the GPU runs every time it wants to know what color a pixel is (very roughly). The shader is given a coordinate and asked "Hey what color is this bit here?" and the shader responds with "Ecru" or whatever the color is.

The shader I wrote to handle this screen display basically has the following algorithm:

  • Inputs are the coordinates being rendered and the two textures described above: screen memory texture and the font texture.
  • Convert the coordinate into a character position. I.e. (0,0) for top left, (79,24) bottom right.
  • Sample the screen memory texture at that character position to find the character which should be rendered and any attributes (i.e. 0x580100 would be a bold 'X').
  • Determine how far "into" this character we are - ie which pixel of the glyph are we rendering.
  • Sample the font texture to determine if the pixel at that position in the glyph is on or off.
  • Return green color if "on" and black if "off" (or the reverse if inverse or brigher green if emphasized, etc.)

Here are some screenshots of the resulting texture when this shader is applied:


Closeup showing emphasized, inverse text and some of the code page 437 "graphics" characters:


Extreme closeup showing some simulated CRT effects:


The 3D Model

Finally I needed a 3D model of the computer. I personally don't like paying for things so I decided to make this myself. I started by purchasing a working IBM 5160 computer off eBay and taking pictures of the whole thing. See how I managed to spend a lot of money in order to avoid spending a little money? Smart!


But now I own an actual PC from the early '80s! That's an asset, son! And so is the 3D model I generated by spending God-knows-how-much time painstakingly recreating it in Blender. Once that process was complete I had a sort-of-similar-looking 3D model of all the components:


Together at last

Then just put it together... Voila!


This simulated computer can "dial" out over telnet to BBSes (or anything else still running on telnet these days), you can use it to SSH into your enterprise servers and reboot them, or you can even use it as a front-end for a virtual PC running on QEMU for even greater levels of realism. More details on that in another blog post.

As always if you have any questions I'll be glad to answer them. Thanks for reading!
« Last Edit: August 05, 2020, 09:40:52 AM by brickshot »
ThemsAllTook
« Reply #13 on: August 05, 2020, 10:12:19 AM »

Super cool stuff! I'll definitely be keeping my eyes on this one.

Rogod
« Reply #14 on: August 06, 2020, 08:36:49 AM »

Wow, it actually really surprises me that nobody has tried this before - it's kind of combining the tech used in games like Megarace and Myst, but in real life, to make the ultimate experience. - Big fan Gomez

Ishi
« Reply #15 on: August 06, 2020, 10:05:42 AM »

It's great seeing the photo of the computer on your green desk, and then seeing the in-game model sitting on the in-game version of the same desk.

Beastboy
« Reply #16 on: August 06, 2020, 11:17:50 PM »

Wow, the amount of polish in this project is amazing. Good job - you deserve appreciation for this quality work.
a-k-
« Reply #17 on: August 07, 2020, 01:14:34 AM »

This is such a wonderful project! I'm really curious how you approached the integration of QEMU.

brickshot
« Reply #18 on: August 07, 2020, 02:59:48 PM »


Hey thanks everyone for reading and for your comments!

Quote
I'm really curious how you approached the integration of QEMU.

Well, it was not very easy, unfortunately. QEMU provides a -curses display option, which is great because curses can talk vt100, which I'm able to display. That's how I ended up doing it. The annoying/hard part is that QEMU has this in its source code:

Code:
if (!isatty(1)) {
    fprintf(stderr, "We need a terminal output\n");
    exit(1);
}

What that's doing is helpfully checking to make sure you have an interactive terminal before it will run in curses mode. The problem is that I don't have an interactive terminal when starting QEMU as a child process from within the game - I just have stdin and stdout pipes, which don't cut it. There's a utility called empty which handles this scenario by creating a PTY and wiring it up to the child process correctly, so that's what I'm currently using.

What would be great is if QEMU provided shared-memory access to the internal text screen, or some kind of API for accessing the internals of the emulated system. That's probably not possible for some large number of good reasons. It would also be nice if they just took that check out (maybe I could convince them to make it skippable with a flag, I don't know).

Anyway, here's what it looks like browsing the C: drive and running an old terminal program called Crosstalk:


Have a great weekend everybody.

-josh