Amazon EC2 (Elastic Compute Cloud) Instance
This is a server running on another free year of Amazon EC2 cloud computing, where "cloud" just means a virtual machine in some Amazon server farm.
├── - Amazon EC2
│   └──
└── - Dell PowerEdge R210 II running Gentoo 4.19.27-gentoo-r1 (Frontier FiOS 50 Mbps up/down)
    ├── - Garage Camera (view/view, click HTTP JPEG to avoid installing VLC)
    └── - Street Camera
Random Image:
1 / 44
Quake3 BSP rendered with Laplacian edge detection. I think it looks cool... It is essentially just a 3x3 kernel filter, think something like:
 0 -1  0    -1 -1 -1
-1  4 -1 or -1  8 -1
 0 -1  0    -1 -1 -1

I forget which; it was a *long* time ago. I think it was in the "Orange Book" (OpenGL Shading Language, 2nd ed.).
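For reference, the filter itself is only a few lines. Here is a minimal sketch in C, assuming an 8-bit grayscale image in a row-major buffer (in the actual demo this would run as a fragment shader over the rendered frame; the 4-neighbor kernel from above is hard-coded):

```c
/* 3x3 Laplacian edge-detect pass over an 8-bit grayscale image.
   Kernel:  0 -1  0
           -1  4 -1
            0 -1  0
   Border pixels are skipped; output is clamped to [0,255]. */
void laplacian(const unsigned char *src, unsigned char *dst, int w, int h)
{
    static const int k[3][3] = { { 0, -1,  0},
                                 {-1,  4, -1},
                                 { 0, -1,  0} };
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int sum = 0;
            for (int j = -1; j <= 1; j++)
                for (int i = -1; i <= 1; i++)
                    sum += k[j + 1][i + 1] * src[(y + j) * w + (x + i)];
            if (sum < 0)   sum = 0;     /* clamp */
            if (sum > 255) sum = 255;
            dst[y * w + x] = (unsigned char)sum;
        }
    }
}
```

Flat regions cancel to zero and only intensity discontinuities survive, which is why the BSP render comes out as outlines.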
2 / 44
Random screenshot I thought looked cool: a Wavefront OBJ model rendered with Phong lighting and normal maps, no
textures. (Yes, the normal of the last row of the Bezier-curved archway is pointing the wrong direction. It's fixed now.)
3 / 44
Parallax mapping with offset limiting. Needs work; my tangent-space matrix is probably not quite right. I'll implement
self-shadowing via Tatarchuk's method when I have time.
4 / 44
AC-to-DC converter... 5 V DC out with (hopefully) minimal ripple given the constraints.
5 / 44
Common-source MOSFET amplifier operating in the saturation region. I lost the values for the resistors, so it isn't doing the
50 V/V amplification it should be doing, but it works. (Red is the small-signal input; green is the output voltage.)
6 / 44
Common-emitter BJT amplifier that is amplifying like it is supposed to. The two extra BJTs on the bottom are a current mirror.
7 / 44
Just in case you ever needed an ARM assembly code example. This works in CodeWarrior; be careful, as it is picky about whitespace.
8 / 44
That's one spicy metaball. (It looks better in motion; download the example.) Made this the other day after reading
about marching cubes for the millionth time. Finally decided to write it up. Surprisingly easy to code given the tables
that *everyone* uses.
Download my example code here:
9 / 44
Decided to use MD5 files as animated models in my engine; here it is right after crudely putting it in from a side project.
The guy looks better animated. I still need to ensure the vertex normals are correct and generate tangents. Used this site's
( example, but rewrote all the code to my liking. Videos: part1 part2
10 / 44
This isn't all that impressive, but it is the correct output of some line drawing code, e.g. draw_line(x1,y1,x2,y2). It's kind of annoying to get all the special cases correct: lots of integer slope y/x code (unless it's better to do x/y), and symmetry says you can use the same code for the upper-right octant as for the rest of the slopes, so you negate an axis and reuse the code, etc.
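A compact way to handle all of those octant cases is the standard integer Bresenham formulation with sign steps on both axes, which is the axis-negation symmetry trick in action. A sketch, assuming pixels are written into a small byte framebuffer rather than a real surface:

```c
#include <stdlib.h>

/* Integer Bresenham line drawing, draw_line-style as in the text.
   The sx/sy sign steps plus a combined error term cover all eight
   octants without separate special-case code paths. */
void draw_line(unsigned char *fb, int w, int x1, int y1, int x2, int y2)
{
    int dx = abs(x2 - x1), sx = x1 < x2 ? 1 : -1;
    int dy = -abs(y2 - y1), sy = y1 < y2 ? 1 : -1;
    int err = dx + dy;                  /* combined error term */
    for (;;) {
        fb[y1 * w + x1] = 1;            /* plot current pixel */
        if (x1 == x2 && y1 == y2)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x1 += sx; }  /* step in x */
        if (e2 <= dx) { err += dx; y1 += sy; }  /* step in y */
    }
}
```

The only remaining special cases (horizontal, vertical, perfect diagonal) fall out of the same loop naturally.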
11 / 44
Software rendered cube with affine texture mapping and barycentric triangle rasterization
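The core of barycentric rasterization is an inside test whose weights double as the interpolation coordinates for u,v (and depth). A minimal sketch in C, assuming 2D screen-space vertices with counter-clockwise winding (hypothetical helper, not the demo's actual code):

```c
/* Point-in-triangle test via signed areas. On success, (w0,w1,w2)
   are the barycentric weights of p with respect to vertices a, b, c
   (they sum to 1 and can interpolate any per-vertex attribute).
   Assumes counter-clockwise winding so the full area is positive. */
int inside_triangle(float ax, float ay, float bx, float by,
                    float cx, float cy, float px, float py,
                    float *w0, float *w1, float *w2)
{
    float area = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    if (area == 0.0f)
        return 0;  /* degenerate triangle */
    /* sub-areas opposite each vertex, normalized by the full area */
    *w0 = ((cx - bx) * (py - by) - (cy - by) * (px - bx)) / area;
    *w1 = ((ax - cx) * (py - cy) - (ay - cy) * (px - cx)) / area;
    *w2 = 1.0f - *w0 - *w1;
    return *w0 >= 0.0f && *w1 >= 0.0f && *w2 >= 0.0f;
}
```

A rasterizer loops over the triangle's bounding box calling this per pixel; affine texture mapping is then just u = w0*u_a + w1*u_b + w2*u_c (hence the characteristic warping without the perspective correction covered later).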
12 / 44
I'm just going to explain bitcoin because I think it's cool.
Bitcoin is a decentralized digital currency that is pseudonymous (every transaction is public, but wallets aren't tied to names). You can send and receive payments online without a middleman. Sort of like PayPal without the PayPal, if that makes any sense.
How do you use it?
  • Well, first you need a wallet, which is a private/public key pair used to identify your account on the network (think of it as a bank account). You create a wallet using software that you can download from, look for bitcoin-qt.
  • So now you need bitcoins
  • You can get them by the traditional means of selling goods or services and accepting payments to your public address which is just a string of random numbers and letters. As an example my bitcoin wallet public address is: 18EHKKRn31FjveaukvRuUv7wnyVrcei6dm (feel free to give me money)
  • Another easy way would be to exchange cash for bitcoins at an online exchange like you would do to convert any one currency to another. (Which is also how you would convert bitcoins back into dollars)
  • And the last way is by mining. (Which is why there are any bitcoins to buy to begin with)
Bitcoin mining is the process of creating SHA-256 hashes of some data until you get a hash that is less than a certain number.
  • The data is called a block, which is composed of recent transactions, and that certain number is set by the network to control the growth rate of bitcoins such that no more than 21 million are created in total over the next hundred years.
  • The first person to create the hash gets 50 bitcoins.
  • Every four years this value halves, e.g. you only get 25 bitcoins starting with the halving at the end of 2012.
  • Most people hash in groups and split the 50 bitcoins as it takes a long time to find the magic hash that is less than the given number.
  • When you finish with one block, you add it to the blockchain, which is where you keep all the old blocks.
  • This blockchain is shared peer-to-peer style with every mining node on the network and has every transaction that has ever occurred stored in it. (This means that at any point in time you can check how much money a certain public-key wallet has in it.)
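The mining loop described above can be sketched in a few lines. Note the hash below is a toy 32-bit mixer (FNV-style) just to keep the example self-contained; real bitcoin mining double-SHA-256 hashes an 80-byte block header and compares against a 256-bit target:

```c
#include <stdint.h>

/* Toy stand-in hash -- NOT SHA-256. It only exists so the
   "hash until it's below the target" loop is runnable. */
static uint32_t toy_hash(uint32_t data, uint32_t nonce)
{
    uint32_t h = 2166136261u;
    h = (h ^ data)  * 16777619u;
    h = (h ^ nonce) * 16777619u;
    h ^= h >> 16;
    return h;
}

/* Try nonces until the hash falls below the target; return the
   winning nonce. A lower target means more work, which is exactly
   how the network tunes difficulty to cap issuance at 21 million. */
uint32_t mine(uint32_t block_data, uint32_t target)
{
    uint32_t nonce = 0;
    while (toy_hash(block_data, nonce) >= target)
        nonce++;
    return nonce;
}
```

Anyone can verify the result with a single hash, which is what makes the proof of work cheap to check but expensive to produce.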
13 / 44
Mandelbrot set generator, because why not. It's the set of points c on the complex plane for which iterating z = z² + c stays bounded; the result looks pretty and has infinite edge detail.
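The escape-time iteration at the heart of a generator like this is tiny; a minimal sketch in C:

```c
/* Escape-time test for the point c = (cr, ci). Returns the iteration
   at which |z| exceeded 2 (a guaranteed escape), or max_iter if the
   orbit stayed bounded -- i.e. the point is (likely) in the set.
   The returned count is what gets mapped to a color. */
int mandelbrot(double cr, double ci, int max_iter)
{
    double zr = 0.0, zi = 0.0;
    for (int i = 0; i < max_iter; i++) {
        double zr2 = zr * zr, zi2 = zi * zi;
        if (zr2 + zi2 > 4.0)
            return i;            /* escaped -> outside the set */
        zi = 2.0 * zr * zi + ci; /* z = z^2 + c, expanded */
        zr = zr2 - zi2 + cr;
    }
    return max_iter;             /* never escaped -> inside */
}
```

Run it for every pixel mapped onto roughly [-2.5, 1] x [-1.25, 1.25] and color by the returned count; zooming is just shrinking that window.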
14 / 44
Simple ray tracer based on codermind example, renders three spheres very slowly
15 / 44
Was having an issue adding omnidirectional shadow maps to my engine, so I needed to make a basic shadow-mapping demo to iron out the issues. Here is the demo: , used as an example.
16 / 44
Small demo of multiplayer functionality on a local network; it seems to work pretty nicely and is
somewhat playable. Entities aren't shaking anymore (probably was some floating-point rounding
issue). The fourth player is a bit laggy, but likely because I'm running all four on the same machine.
Does Huffman compression of packets; still need to delta-compress.
17 / 44
Show names over entities. Kind of complicated, as I need to transform the point (0.0, 0.0, 0.0) from object to world, world to eye, eye to clip space, then divide by w to get normalized device coordinates, and finally scale by 0.5 and add 0.5 to land in window space.
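That chain can be sketched as follows, assuming a combined column-major modelview-projection matrix (OpenGL convention). Projecting the object-space origin then reduces to reading the matrix's fourth column:

```c
/* Project an entity's object-space origin (0,0,0) to [0,1] window
   coordinates. mvp is the combined projection * view * model matrix,
   column-major. Multiplying (0,0,0,1) by it just selects column 4.
   Returns 0 when the point is behind the camera (skip the name). */
int project_origin(const float mvp[16], float *sx, float *sy)
{
    float x = mvp[12], y = mvp[13], w = mvp[15];
    if (w <= 0.0f)
        return 0;               /* behind the camera */
    *sx = (x / w) * 0.5f + 0.5f; /* NDC [-1,1] -> [0,1] */
    *sy = (y / w) * 0.5f + 0.5f;
    return 1;
}
```

Multiply the result by the viewport width/height (and flip y, depending on the 2D text API) to get the pixel position for the name label.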
18 / 44
Shadow mapping early screenshot in my engine: rendering every light in the left, right, up, down, forward, and back directions (cubemap), using an FBO to save the depth/Z buffer to a texture, and finally doing the "real render" in camera space using the shadow matrix and depth map to generate shadows.
19 / 44
Implemented portal cameras and mirrors: just an FBO that renders the scene from the misc_portal_surface entity and uses it as a texture, although generating the texture coordinates was a bit of a trick.
20 / 44
Ported the Quake 1 software renderer to Windows; a very cool, digestible example of software rendering. Wanted to increase the resolution, but was having issues with textures segfaulting when going past the original 320x200 mode 13h resolution. Texture filtering is a nice thing we don't usually think about nowadays. DOS and Windows binary
21 / 44
Stanford bunny is kind of ugly... but hey, it works. The idea is you dynamically generate 3d
geometry from the silhouette of an object from the perspective of the light by extruding it. Then you use that
shadow volume and the stencil buffer to create the shadow stencil through clever incrementing and decrementing
of front- and back-face depth-test results. Used Jack Hoxley's tutorial as an example (always had a hard time with
the winding order of the extruded sides). Already added it to my engine along with shadow mapping; the performance
is noticeably higher than shadow mapping. Will add screenshots later -- I threw Lucy and a couple of teapots in the
zip file if you want to change the model.
22 / 44
Bloom effect, pretty much you take your scene, mask out what you want to appear bright (in this case it's just an average color value above a threshold) and make the rest black. Then you blur the bejesus out of it, (usually gaussian blur) and add it back to the original image. High Dynamic Range rendering, on the other hand, uses more pixel precision (think float vs double) and "tone maps" it back to regular precision 8 bits per pixel (think of it like changing the exposure on a camera)
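The bright-pass mask step from that description might look like this, assuming packed 24-bit RGB pixels and the average-above-threshold rule mentioned (the blur and the additive recombine would follow as separate passes):

```c
/* Bloom bright-pass: keep pixels whose average channel value exceeds
   the threshold, black out everything else. src/dst are npixels
   packed RGB triples. The result is what gets blurred and added
   back onto the original frame. */
void bright_pass(const unsigned char *src, unsigned char *dst,
                 int npixels, int threshold)
{
    for (int i = 0; i < npixels; i++) {
        const unsigned char *p = src + i * 3;
        unsigned char *q = dst + i * 3;
        int avg = (p[0] + p[1] + p[2]) / 3;
        if (avg > threshold) {
            q[0] = p[0]; q[1] = p[1]; q[2] = p[2];  /* bright: keep */
        } else {
            q[0] = q[1] = q[2] = 0;                 /* dim: black */
        }
    }
}
```

In practice this runs on the GPU at a reduced resolution, which both speeds up the blur and widens its effective radius.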
23 / 44
Screenshot of stencil shadows in engine. Performance is pretty good, and I prefer the hard, high-resolution shadows over the pixelated soft shadows you get with shadow mapping -- still need to apply it to world geometry; only on items currently.
24 / 44
Screenshot of screen-space ambient occlusion in engine. It's a subtle effect that adds shadows to ambient lighting based on the depth-map variations near a pixel. I cranked up the settings to better show what's going on versus the right side with it off -- people often view this as a 10fps performance hit that does nothing, but performance seems decent so far. See the OpenGL SuperBible, 7th Edition, if you are interested in more info on how it works. Or just look at the book's GitHub; the SSAO shader is in the media-pack.
25 / 44
Heightmapped terrain. Kind of neat how quickly, and at what a large scale, a simple heightmapped terrain can be built; q3tourney2 (the entire level) is the small thing at the top for scale. It's over a gigabyte in size when exported as an OBJ, with 11,148,642 triangles and 5,579,044 vertices generated from a 2362x2362 image. (I think the source heightmap is of mountains in New Zealand.)
26 / 44
Earth gif. This is an icosahedron with triangle subdivisions; each frame there are more and more triangles representing the sphere, starting with 20 triangles and ending with 5,242,880. Eventually I'll add a heightmap and create mountains on the planet, then add Rayleigh scattering (what makes the sky blue), which has a cool earth-surface-to-space transition effect -- a good resource for icospheres is -- I actually found the sample code *after* getting it to work, but here it is to save others some headaches: subdivide.c
27 / 44
Very rough smoothed particle hydrodynamics simulation. Still needs general improvements, as well as generating a surface from the particles; the old-school approach would be marching cubes, but there seem to be newer screen-space surface generation techniques that are faster (e.g. Simon Green's Screen Space Fluid Rendering). Uses Monaghan's approximations of the Navier-Stokes equations and grid-based sorting of particles to avoid O(n^2) computations for finding neighbors. Performance is a huge bottleneck for SPH, as generally you are limited by the maximum number of particles you can simulate in real time (and everyone wants more particles -- right click to add some). AMD has a good background video for those interested -- and Muller's paper makes some good points and is worth looking at.
28 / 44
de_dust2 from Counter-Strike: Source being rendered as lines. Source engine BSP files use a Quake 2-style edge list to store faces (a relic from the software rendering days) that I need to triangulate to render normally; that, and I still need to figure out how to get the textures loaded -- it'll take some time to render properly and get collision detection working like I do with Quake 3 BSPs.
29 / 44
Screenshot of my engine software-rendered. Sutherland-Hodgman clipping still needs work, but fps is decent. Getting perspective-correct texture mapping was a bit troublesome: you interpolate 1/w, u/w, and v/w; I was using 1/z, u/z, and v/z, which would have worked if I had used the Z before perspective division, not after. (Same goes for W, but after division it becomes 1.0, so that one's pretty obvious.) -- Updated from an old screenshot that had z-buffer issues.
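The 1/w trick can be sketched for a single interpolation parameter t between two projected vertices (a hypothetical helper, not the engine's actual rasterizer code): interpolate u/w, v/w, and 1/w linearly in screen space, then divide to recover the true u,v.

```c
/* Perspective-correct u,v at parameter t in [0,1] between two
   projected vertices. u/w, v/w and 1/w interpolate linearly in
   screen space even though u and v themselves do not; dividing by
   the interpolated 1/w undoes the perspective warp. */
void persp_correct_uv(float u0, float v0, float w0,
                      float u1, float v1, float w1,
                      float t, float *u, float *v)
{
    float uw = (u0 / w0) * (1 - t) + (u1 / w1) * t;
    float vw = (v0 / w0) * (1 - t) + (v1 / w1) * t;
    float iw = (1.0f / w0) * (1 - t) + (1.0f / w1) * t;
    *u = uw / iw;
    *v = vw / iw;
}
```

When w0 == w1 this degenerates to a plain lerp, which is why affine mapping only breaks down on triangles with real depth variation.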
30 / 44
This is a Windows GDI version of the old from Marco (the original used an old version of Allegro, which still works, but is pretty old). It accompanies an old car-physics paper that describes the basic simulation of vehicles, which is still very much relevant today. You can find the original version here:, which was backed up by a guy that made a JavaScript version of it. I'll eventually make a 3d version and slowly deal with things like inclines, collision detection, leaving the ground, etc. -- Below is the beginning of the 3d version; the transformations are bad currently, and it's using old-style fixed-function OpenGL for now.
31 / 44
Started messing with spring-mass systems, mainly to get a good suspension system for car simulation, but started down the path of cloth simulation... One spring and mass leads to multiple springs connected to each other, which eventually leads to a fabric of springs. To get good cloth, though, you need a lot of connections: a center point needs to connect up, down, left, and right ("structural springs") and diagonally ("shear springs"), and if you also connect to every second neighbor left-right and up-down ("bend springs") you get better self-bending properties. -- Pain in the behind to set all those springs up though -- NVIDIA paper on the layout.
Note: I made the mistake of putting the points and mass into the spring class. What you really want is a separate class for particles/points with mass, and to add the spring forces to those, which do the integration. (You can put pointers to the particles into the spring class.) This also makes the spring class almost too simple, as it doesn't deal with gravity, position, velocity, acceleration, etc. and essentially just does force_spring = -k * displacement, but it makes multiple springs sharing a single mass much easier to handle.
32 / 44
Cloth simulation on a flag in engine. I was going to add more points to the flag to get better results, but thought I'd better look at it as points first to see how many I've already got. 20x40 points looks a lot more dense than I thought: triangle-wise, front face and back face, that's like 1482 triangles per side, or 8892 vertices, which is a lot for a flag -- coming close to the entire triangle count of the map... so maybe less is more here, as in most cases.
33 / 44
Normal mapping. Had this working a long time ago along with displacement mapping, but never left it permanently enabled; now it's easily toggleable from the console. My non-lightmap per-pixel lighting is still a bit odd, though (changes a bit when rotating the camera about). I'll probably re-add displacement mapping too, but need to go through all the normal maps and add the height info to the alpha channel again. Note: I've extremely exaggerated the normal mapping effect here; I've always felt like it makes things look too 'plastic', but with a good artist I think it can be a great visual improvement while still being subtle. -- Brief explanation: instead of using a single normal for the entire triangle, you encode normals in a texture and perform the lighting per pixel using the texture's normal, which approximates the true surface normals. (A normal is the direction pointing away from the surface.)
34 / 44
So this is an Intel 8080 emulator running Space Invaders, based on's tutorials.

Originally I started with their site's example, 8080emu-first50.c, got it running, and was like, hey, this isn't doing anything -- it printed instructions but didn't render anything or do much else. I got the ROM files from this page: and started making a hybrid 8080 from the two code bases combined, but it didn't work. So I just got the LemonBoy example to compile unmodified, and it sort of worked: the image was sideways and the input keys didn't work. So I fixed the sideways image and eventually realized, no, that code doesn't work at all, as aliens were in the wrong places and weird stuff was happening.

But from that code I figured out the video memory locations and how rendering worked pretty well, so I went back to the first example with my newfound knowledge. 8080emu-first50.c ran and showed the intro screen now with the drawing code, but then hit an infinite loop after a while. -- Turns out it needs the video screen interrupts implemented to get out of that state. Got the interrupts working from emulator101's Cocoa example and interrupt page, but then needed to get the input working so I could play. Referencing the broken example, the emulator101 page, and the Cocoa example, I eventually got the controls working. -- But for some reason making the input work stopped the aliens from drawing at all; fixed a little bug there (wasn't setting register A after the OUT instruction), and then it finally worked completely. (Note the above-mentioned 8080emu-first50.c example is missing a lot of opcodes that you can get from the Cocoa example.)

Currently the left side of the screen is rendering the working memory; change VRAM_START 0x2400 to fix that if you'd like, but I left it as is. I take no credit for the code, as it was mostly from emulator101's public domain example, but it serves as a good basis for something like the Game Boy's Z80 or a full 8080 emulator. -- Oh yeah, left/right moves, space shoots, shift is the start button, control adds a coin. So to start the game, press Control to add a coin, then Shift to press Start, and you'll start playing.
35 / 44
Just some sprites moving about in MS-DOS graphics mode 13h (320x200 stretched to 320x240), compiled with Turbo C++: DOS.C. Might make a spinning cube or something later. Would like to get VESA modes going (higher resolutions), but they are a super pain in the behind to set up compared to VGA modes.

-- Managed to get VESA going from the old Hello VBE! example: SVGA.C

-- Did the same thing in 1024x768 using the above example. Sprites blink because I am writing to the video RAM twice (clear and sprite draw); really need a buffer that is then blitted, but 16-bit mode can't allocate 768k sadly: DOS2.C

-- Here's another VESA example that is supposedly faster, as it uses a linear framebuffer; it compiles under WATCOM using DOS4GW in 32-bit mode. (DOS2.C uses a 64k window with 'bank switching', which remaps that 64k around and is slow, as it requires an interrupt.)

-- And finally, an LFB version of the same darn thing that has good performance at high resolution and clears and such without flickering: DOS3.ZIP
36 / 44
3D spinning software-rendered cube in 32-bit MS-DOS at 1024x768, 256 colors. The frame rate is kind of slow; it's better at lower resolutions, but then it looks bad. Being slow is actually a good thing here, as it lets you notice performance improvements to the rasterization more easily. (That said, this is the older affine-textured barycentric triangle code from the Windows example, modified for DOS.) Code is messy, but hey, it works: DOS3D.ZIP, compiled with WATCOM.
37 / 44
DOOM level viewer; just draws the vertices and linedefs. (DOOM is completely 2d if you didn't know; each line is extruded to have a floor height and a ceiling height.) Took less than an hour; I was surprised how easy it was to get together. Arrow keys pan, pgup/pgdn zooms, keypad plus and minus change the level. You'll need the Visual Studio 2015 redistributable installed if you don't have it already -- went ahead and added map objects (the things lump) in red.

-- - this one does the BSP traversal. Press F1/F2 to enter BSP mode (and increase/decrease the traversal depth); the BSP dividing planes are labeled P0, P1, P2... etc. Then press space to mask the axial BSP planes that you are on the inside of (angled lines are a bit more trouble to mask with basic GDI functions). Blue lines are the segs of your sector/cell; multicolor lines (dashed or solid depending on the side you are on) are the BSP planes. If you want to see how the BSP divides the level in general, hold F1 until you get to a minimal depth, then hold space and increase the traversal depth with F2. The mouse cursor is the view/player position. F3/F4 will enter the draw-order mode and change the max subsegments drawn to show the draw order (near to far). Typically you would throw away anything outside your view frustum so you wouldn't spend time attempting to draw things that are behind you. Hold F3, then when you get down to one draw segment, press F3 to increase the draw limit to see the way the draw order would be sequenced. (Hope all of that makes sense.)

-- this one just adds crappy line projection code; walk around with the arrow keys, will improve later. Kind of never finished this, but hey, it has font rendering!
38 / 44
Wolfenstein 3D-style raycasting code from, modified to run on Windows. Currently flickers a bit, as I haven't messed with it much, but just sticking it here so I can look at it later. Went ahead and added textures and Wolfenstein-style two-tone floor/ceiling.
39 / 44
Bisqwit's NES emulator (MOS 6502-style processor) ported over to Windows. I was messing with his Doom-style portal renderer, got that to work in Linux (Visual Studio won't compile it due to a lot of non-standard GCC stuff in the code), and started looking at the emulator shortly after. It works in Linux, and I was surprised it compiled in Visual Studio 2015 without much trouble: threw away the SDL stuff, added Win32 sound output and keyboard input, and made a small suggested mod to make the NTSC output look better. Here is the single source file and a Visual Studio 2015 debug binary. The code is ugly, but most emulators are. Note: when compiling, it worked in debug mode but not release; probably some bad optimizations or something there. -- Have it hard-coded to run "test.nes" currently; no ROMs included due to copyrights etc. Bisqwit video1 video2 and faq. Original source code here, which I got from this decoded article. -- Note: the music was perfect in Mega Man 2, but Super Mario had stuttering gameplay for some reason, so I sacrificed music quality for gameplay. Press keypad add/subtract to change the game speed (delay per 10k instructions). Arrow keys move, enter is start, space is select, and Z and X are A and B respectively.
40 / 44
This is a Windows version of Bisqwit's Doom-style portal renderer; original source code here: prender.c, and accompanying video: here. He also has a more advanced version that has textures and generates a 4.5GB lightmap file from them; if you are interested, check the video description and run it on Linux. -- I'm going to use this style of rendering to draw Doom maps from the WAD file. This tech is a bit closer to Duke3D / the Build engine, as it supports things like a room being on top of another room, as well as non-Euclidean geometry through the use of portals. (Think of a tiny closet opening into a giant cathedral type of stuff; under the stairs you might notice it a bit.) Originally I was just going to create a triangle mesh from the Doom maps and render them with OpenGL/D3D, but doing ear-clip triangulation of the sectors was becoming a pain, so I figured I might as well use the data structures as intended. Doom renders individual vertical columns of pixels at a time from the projected wall lines, similar to Wolfenstein raycasting. It treats floors and ceilings as a sort of oblique plane with no real geometry to it, which it calls visplanes. Compared to Wolf3D, Doom adds the ability to render floors/ceilings/walls at different heights and at angles, but no ramps or sloped ceilings can exist. You also can't look up/down, as the renderer relies heavily on the assumption that walls will always be vertical lines, for speed reasons (Intel 486-era CPUs with 4MB RAM). The display might be a bit glitchy, as I'm not really syncing the page flip, and the collision detection seems off when compared to the Linux version, but it works. WASD to move, mouse to look about. -- Note: modified it quite a bit to make it subjectively cleaner, although if you want something closer to the original, prender_win32.c is still laying about.
41 / 44
So this isn't my code, but I remember seeing this old cave demo a long time ago and thinking these guys are doing it right. Didn't realize it at the time, but it was Ken Silverman of Duke3D / Build engine fame, and Tom Dobrowolski, who I think worked off Ken's initial engine prototype. So think of a voxel like a 3d pixel: the smaller the block, the better the fidelity. Issues that you run into are data sizes (as a 3d array of small blocks gets very large very quickly), animation (which ends up becoming frames of blocks, which don't look very good unless, again, you have small blocks), and the difficulty of using traditional 3d hardware, which is set up to transform and raster triangles. So far, other than Worms3D, I haven't seen many voxel-based games out there. A more modern engine, Atomontage, brings things further than this example. But this example has source code designed to run on older machines (Pentium IIIs), where Atomontage sort of duped investors into thinking there would be an ROI on their investment and squandered their money while keeping the code closed source. See Ken's page for the source and the cavedemo seen here: -- Note: to compile this you'll probably need an older Visual Studio -- might want to cruise GitHub to see if anyone has already brought this code into the present -- also don't confuse voxels with Minecraft, which uses the GPU to render a lot of cubes; like voxels, but made of triangles. One more thing: Kevin Bray has a pretty cool sparse voxel octree demo here, "sparse" meaning you don't store all voxels (only the outer visible shell) and use an octree to compress storage by using bigger voxels where everything is uniform. In terms of rendering, the voxlap demo does something slightly more advanced than wave surfing to allow six degrees of freedom.
-- So I was thinking you could probably use a DDA-type line drawing algorithm in 3d to raycast voxels, googled about, and found this video, which does exactly that: I took his code, took out all the Borland stuff, and got it going in Visual Studio. He is using the voxel file format from Ken's demo, and it ends up using 4GB (1024 * 1024 * 256 blocks at 16 bytes each) just for the voxel data structure (compile as 64-bit to prevent malloc failing). The code uses OpenGL, which I compiled with GLEW, but essentially just as a framebuffer, so I'll go back in later and take OpenGL out, as it's not needed. Still looks kind of low-res, but the code is interesting and compact; theoretically, if you get the algorithm right, it would look like Ken's demo, but without all the license restrictions -- one thing I noticed, though, is that it doesn't set interior voxels as solid, so things are a hull only (i.e. pressing x to shoot should make tunnels) -- Version without OpenGL -- throw threads into this and I'm sure it'll look much better.
42 / 44
So is a neural network digit classifier. Tab will load a random MNIST number, Enter will predict what it is, and space will clear the screen. You can draw with the mouse, but results aren't great for stuff that wasn't in the learning set. (Could also be that I don't have much of a fade effect on the pen, which was added later.) Most of the neural network code was taken from here:

A good written intro to neural networks is here -- and a few YouTube videos (which got me started on this whole thing) are here and here

-- PS: is a version using the EMNIST dataset for alphanumeric characters -- these currently only do one epoch, and the neural net is pretty basic, so again accuracy isn't great. I may swap the network out for a convolutional neural network later to improve accuracy.

-- Later has happened: is the MNIST digit recognition using the convolutional neural network from here: Accuracy should be a lot better. His code (unmodified) will learn, then start processing a ppm file every second; mine lets you draw directly on the window after learning has finished. Press tab to load an image from the dataset, space to clear the screen, and enter to process the image buffer. Learning takes a bit longer with these CNNs, but it's much more accurate. Will throw together an alphanumeric version here in a bit. Technically the learned data can be saved to disk and loaded to skip the annoying startup wait, but that is not implemented in these examples.

-- Finally, which is the same convolutional neural network (CNN) from, but working on the extended MNIST dataset, giving alphanumeric classification. Seems to work pretty well, but I find that it often doesn't try when it gets confused, i.e. 100% confidence or 0% confidence, with few cases in between. This probably isn't the "best" CNN, as you can go deeper etc., but at that point you may want to start using multiple cores and GPUs with TensorFlow and Python. Still, it seems to work somewhat decently.

-- This one works on the CIFAR-10 dataset (small images). Download the data separately and drop it in the data directory: CIFAR-10 binary version (suitable for C programs). (Again, not great at classifying.)

-- I would do CIFAR-100, but since it has superclasses and classes, I think it would be nice to have one neural net determine the superclass and then another neural net for each subclass group, e.g. one neural network determines it's a flower, then another determines the flower type. -- So the second one is the approach I suggested; it has a *huge* training time. The first one just runs a neural net on each superclass, and I select the one with the highest confidence. -- I would say both are very bad at CIFAR-100 though, but that leaves room for future improvement.
43 / 44
So, in digging up old code, these three images are from very early fixed-function OpenGL messing about I did. I learned the WinAPI/MFC when I was in high school (read Petzold's books over the summers) before college, and eventually made the hacked version of AUXDEMO.C from an old MSDN reference page. The next two are *much* later in the timeline (after the OpenGL SuperBible, 3rd edition, was released) and are variations of SphereWorld from the book with basic camera movement (forward/back, left/right, just basic horizontal rotation, as 3d rotation without gimbal lock wasn't really easy for 18-year-old me, as you'll notice in the PhysX example). The last image is motion blur using the accumulation buffer, which I never really liked because you have to render the motion blur "backwards", if that makes sense. Timeline is probably sometime between 2002-2004.
44 / 44
Figured I should add a screenshot of dynamic lighting in engine. The light values probably need to be toned down a bit, and some global shadows would make things cooler, but dynamic lighting really adds a nice wow factor. When shadow mapping is enabled, I really like seeing my own shadow too when walking about.

I know what you are thinking... this webpage looks horrible, but that's okay, pretty ones usually don't validate ;o)