Do you remember this?

E.T. for the Atari 2600

I don't. I wasn't around for this game. But if I did remember it, I still wouldn't remember it like that. No, I'd remember it like this:

Aw yeah baby. Look at that color bleed. Look at that blurry mess. No really, look at it:

Now that's what I'm talkin' about baby. I should try this E.T. game too, it looks pretty good.

But you might be wondering: if I didn't grow up with the Atari 2600, how do I remember this sort of aesthetic?

Well, my generation grew up with these depressing abominations:

A plug n' play game

And these things always used composite cables. Composite cables are absolute garbage because they use a single signal to transmit video:

Composite cables
Jesus Christ

This is a big problem, because images are composed of three values: red, green, and blue (RGB). On the web, you'll also see a fourth value, "alpha" (transparency), but that doesn't apply right now.

So you're sending three values down a single wire, leaving it up to the device to process the video and disentangle these color values. And that means that the picture usually looks like this:

The nightmarish Spongebob plug n' play game

Okay, to be fair, I simulated that look. It wasn't usually that bad. But it was pretty awful.

Let's replicate that!

Oh hell yeah. 🤘

Recently, I wanted to create a mock Atari 2600 game in the browser using the Canvas API. But I realized that it just wouldn't be the same without the horrible video signal. So I wrote some code that applies these distortions to a canvas:

If you press the "reset" button, you can see that the original image is very clean. All of the distortions are being done in real-time using pixel manipulation. You can play with the sliders to produce a more or less distorted image.

Update: I added scanlines as well!

Cool. How?

The code required to do this is surprisingly brief. Most of the code in the fiddle above boils down to registering event listeners on some elements.
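For example, the slider hookups all look more or less like this (noiseEl here is a stand-in name I'm using for illustration, not necessarily what the fiddle calls it):

// Hypothetical example of the listener wiring: a range input
// drives the noiseMax value that getNoise() reads later.
noiseEl.addEventListener("change", () => {
  noiseMax = Number(noiseEl.value);
});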

The animation loop looks like this:

const animateDistortion = () => {
  requestAnimationFrame(() => {
    // Decide whether this frame should include a horizontal sync tear
    const shouldDistortHorizontalSync = Math.random() < desyncFrequency / 100;

    distort(shouldDistortHorizontalSync);

    // Schedule the next frame
    animateDistortion();
  });
};

Let's ignore the horizontal sync stuff for now. Basically, we call distort, which performs all of the pixel manipulation, and then we schedule the next frame with requestAnimationFrame.

Why requestAnimationFrame? Why not setTimeout? If you're not familiar with this function, the difference might not be obvious. In the past, web developers would use setTimeout for things like this, but requestAnimationFrame has many advantages when it comes to performing animations: it automatically fires in sync with the display's refresh rate on a given device, and it stops running when the user navigates to a different tab (setTimeout, on the other hand, would happily chew through CPU cycles in this scenario).
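For comparison, here's a sketch of what the same loop might look like with setTimeout: a hard-coded interval that keeps firing whether or not anyone is watching (the 1000 / 60 is my own stand-in for a 60 fps target):

// Hypothetical setTimeout version: runs at a guessed frame rate
// and keeps burning CPU even in a background tab.
const animateDistortionNaively = () => {
  setTimeout(() => {
    const shouldDistortHorizontalSync = Math.random() < desyncFrequency / 100;
    distort(shouldDistortHorizontalSync);
    animateDistortionNaively();
  }, 1000 / 60); // hope the display is 60 Hz
};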

So how do we distort the image?

const distort = function (shouldDistortHorizontalSync) {
  // Draw the clean image, then grab its raw pixel data
  ctx.drawImage(img, 0, 0);
  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = imageData.data;
  let desyncLineIndex;

  if (shouldDistortHorizontalSync) {
    // Pick a random line at which the screen tear will start
    desyncLineIndex = Math.floor(Math.random() * canvas.height);
  }

  for (let i = 0; i < data.length; i += 4) {
    // Brighten lines that land on the scanline spread; darken the rest
    const scanlineBrightnessAdjustment =
      Math.floor(i / 4 / canvas.width) % scanlineSpread === 0
        ? scanlineIntensity
        : -scanlineIntensity;

    // Shift each channel toward a neighboring pixel and add noise
    data[i] = data[i + redChannelOffset * 4] + getNoise() + scanlineBrightnessAdjustment;
    data[i + 1] = data[i + 1 + greenChannelOffset * 4] + getNoise() + scanlineBrightnessAdjustment;
    data[i + 2] = data[i + 2 + blueChannelOffset * 4] + getNoise() + scanlineBrightnessAdjustment;

    if (
      shouldDistortHorizontalSync &&
      shouldOffsetThisPixel(i / 4, desyncLineIndex)
    ) {
      const desyncOffsetPixels = 4 * desyncOffset;

      data[i - desyncOffsetPixels] = data[i];
      data[i - desyncOffsetPixels + 1] = data[i + 1];
      data[i - desyncOffsetPixels + 2] = data[i + 2];
    }
  }
  ctx.putImageData(imageData, 0, 0);
};

Uhh... whoa.

Yeah, this looks a little crazy at first. It might benefit from an extract-function refactoring. But let's take it in pieces:

ctx.drawImage(img, 0, 0);
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const data = imageData.data;

First, we use the canvas' context to draw out the original, unaltered image. Then, we get the image's data. We need the raw image data so that we can manipulate each pixel.
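If you haven't worked with ImageData before, here's a quick sketch of what that last line hands back. It's one flat Uint8ClampedArray with four values per pixel:

// imageData.data is one flat array: R, G, B, A, R, G, B, A, ...
// Peek at the first pixel's channels:
const [r, g, b, a] = imageData.data.slice(0, 4);
console.log(`top-left pixel: rgba(${r}, ${g}, ${b}, ${a / 255})`);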

We are going to perform a set of distortions here:

  • Adding some horizontal desync (i.e., shifting a group of lines to the right; this is also known as screen tear)
  • Adding some scanlines
  • Shifting the red, green, and blue channels of the image (this is also known as color bleed)
  • Adding noise to the image to achieve a grainy effect
  • Blurring the image

Determining whether we should add screen tear during this frame

Before we called this function, we checked whether we should add any horizontal desync during the upcoming frame:

const shouldDistortHorizontalSync = Math.random() < desyncFrequency / 100;

If the user picks a desync frequency of 100, then we have Math.random() < 1, which is always true, since Math.random() generates values in the range 0 to 1 (with 1 excluded). In effect, the desync frequency slider sets the percentage of rendered frames that will contain a desync distortion: a frequency of 25 tears roughly one frame in four.

This is why we have this chunk of code in our distort() function:

let desyncLineIndex;

if (shouldDistortHorizontalSync) {
  desyncLineIndex = Math.floor(Math.random() * canvas.height);
}

If this is a frame that should have some screen tear in it, we pick a random line number at which the distortion will start.

Adding scanlines

Adding this effect is fairly easy:

const scanlineBrightnessAdjustment =
  Math.floor(i / 4 / canvas.width) % scanlineSpread === 0
    ? scanlineIntensity
    : -scanlineIntensity;

data[i] = data[i + redChannelOffset * 4] + getNoise() + scanlineBrightnessAdjustment;
data[i + 1] = data[i + 1 + greenChannelOffset * 4] + getNoise() + scanlineBrightnessAdjustment;
data[i + 2] = data[i + 2 + blueChannelOffset * 4] + getNoise() + scanlineBrightnessAdjustment;

The key piece of logic is Math.floor(i / 4 / canvas.width) % scanlineSpread === 0. We divide the index of the color value by four because we have four color values per pixel, and we want to know which line the pixel is on. Dividing that by the width of the image and rounding down with Math.floor gives us a line number. We then mod this with the scanline spread: when the remainder is zero, we brighten the line; otherwise, we darken it. When the spread is 2, we brighten every even line and darken every odd line. When the spread is 3, we have two dark lines for every bright line. When it's 4, we have three dark lines for every bright line, and so on.
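To make that concrete, here's the arithmetic for one value (the 320-pixel width and the array index are made-up numbers, not taken from the fiddle):

// Hypothetical worked example of the line-number math.
const width = 320; // imaginary canvas width
const i = 409600; // some index into the data array
const pixelIndex = i / 4; // 102400: we're on the 102,401st pixel
const lineNumber = Math.floor(pixelIndex / width); // 320
// With scanlineSpread = 2: 320 % 2 === 0, so this line gets brightened.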

Adding some color bleed

Now we have to march through the image data. Image data on the canvas is given to us as an array of values in the form [pixel1RedValue, pixel1GreenValue, pixel1BlueValue, pixel1AlphaValue, pixel2RedValue, pixel2GreenValue, pixel2BlueValue, pixel2AlphaValue, ...]. Each pixel in the original image has been pushed into this array as its four separate color values. So we use a loop that increments by 4, allowing us to step through the original image, pixel-by-pixel:

for (let i = 0; i < data.length; i += 4) {
  data[i] = data[i + redChannelOffset * 4] + getNoise();
  data[i + 1] = data[i + 1 + greenChannelOffset * 4] + getNoise();
  data[i + 2] = data[i + 2 + blueChannelOffset * 4] + getNoise();
  // . . .

Given this structure, we can see that data[i] is the red value of the new pixel we're drawing. data[i + 1] is the green value, and data[i + 2] is the blue value. We don't touch the alpha value for this.

We add channelOffset * 4 to each channel's index. We multiply by 4 because every step of 4 indices lands us on the next pixel's version of the same color value.

For example, we have data[i + 1 + greenChannelOffset * 4]. Since data[i + 1] gives us the current pixel's green value, we can add 4 to get to the next pixel's green value, since we know there are four color values for each pixel. If we add 8, we have the green value of the pixel after that, and so on. The first pixel's green value is stored at index 1, the next at index 5, then 9, then 13, etc.

What we are effectively doing is shifting each color value to the next pixel, or the pixel after that, or whatever we have set. So if we set a red offset of 5, for example, then all of the red values will be shifted over 5 * 4 = 20 indices in the array, meaning that all red values will move over 5 pixels. This causes our color bleed effect.

This shifts color values to the left. Can you see why?

Given that we shift red values by five, we're effectively saying data[i] = data[i + 20]. At index zero, this means that data[0] takes the value of data[20], so data from the right gets shoved to an earlier spot in the array.
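Here's a toy sketch of the same idea on a plain array (made-up numbers, not code from the fiddle):

// Copying from a *later* index into the current one slides values left.
const row = [10, 20, 30, 40, 50, 60];
const offset = 2;
for (let i = 0; i < row.length; i++) {
  row[i] = row[i + offset] ?? row[i]; // past the end, keep what's there
}
console.log(row); // [30, 40, 50, 60, 50, 60]: everything slid left by 2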

I hope that makes sense!

Adding noise

Adding noise is much simpler, I promise! At the end of each of the lines we discussed above, we have a call to getNoise():

data[i] = data[i + redChannelOffset * 4] + getNoise();
data[i + 1] = data[i + 1 + greenChannelOffset * 4] + getNoise();
data[i + 2] = data[i + 2 + blueChannelOffset * 4] + getNoise();

This function is dead simple:

const getNoise = () => {
  return Math.random() * noiseMax;
};

So, as the user cranks up the noise slider, we just add higher and higher randomized values to each image color value.
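One detail we get for free: imageData.data is a Uint8ClampedArray, so we never have to clamp these sums ourselves. Anything that overflows 255 gets pinned to 255 on assignment:

// Uint8ClampedArray clamps on write, so noisy sums can't overflow.
const demo = new Uint8ClampedArray(1);
demo[0] = 240 + 40;
console.log(demo[0]); // 255, not 280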

Adding the screen tear

After adding these distortions, we add some screen tear:

if (
  shouldDistortHorizontalSync &&
  shouldOffsetThisPixel(i / 4, desyncLineIndex)
) {
  const desyncOffsetPixels = 4 * desyncOffset;

  data[i - desyncOffsetPixels] = data[i];
  data[i - desyncOffsetPixels + 1] = data[i + 1];
  data[i - desyncOffsetPixels + 2] = data[i + 2];
}

First, we check whether this pixel should have screen tear applied to it. Remember how we calculated shouldDistortHorizontalSync earlier using Math.random()? If that's false, we don't apply any desync this frame. If it's true, then we need to check whether this pixel is within the band of lines that we're tearing, using shouldOffsetThisPixel(i / 4, desyncLineIndex).

This function takes the pixel's index and the line where the tear begins:

const shouldOffsetThisPixel = (pixelIndex, desyncLineIndex) => {
  const scanLineIndex = Math.floor(pixelIndex / canvas.width);

  return (
    scanLineIndex >= desyncLineIndex &&
    scanLineIndex <= desyncLineIndex + desyncLineSize
  );
};

We need the pixelIndex, that is, the number of the pixel in the original image that we're working on. Note that we passed i / 4 here, since, for a given pixel in the original image, we have four values in the array. This means that values 0, 1, 2, and 3 in the array all belong to pixel one, values 4, 5, 6, and 7 all belong to pixel two, and so on.

Math.floor(pixelIndex / canvas.width) tells us which line in the image we are processing. We then check whether that line falls within the band starting at the line we passed in (the screen tear line we calculated earlier) and spanning desyncLineSize lines below it. If it does, we know we need to offset this pixel's position to achieve the screen tear effect:

if (
  shouldDistortHorizontalSync &&
  shouldOffsetThisPixel(i / 4, desyncLineIndex)
) {
  const desyncOffsetPixels = 4 * desyncOffset;

  data[i - desyncOffsetPixels] = data[i];
  data[i - desyncOffsetPixels + 1] = data[i + 1];
  data[i - desyncOffsetPixels + 2] = data[i + 2];
}

We multiply the desyncOffset by four to determine how many array indices we need to shift our values over. This time, we subtract that value, copying each channel into a pixel we've already processed. That matters: the values we're shifting have already had all of the other distortions applied to them.

Finally, we draw the image data:

ctx.putImageData(imageData, 0, 0);

What about the blur?

Oh yeah, that's easy! Since the canvas is an HTML element like any other, we can apply a filter to it:

blurEl.addEventListener("change", () => {
  canvas.style.filter = `blur(${(blurEl.value / 100) * 2}px)`;
});

This gives us a maximum of 2px of blur. Anything above that is a little too much, in my opinion.
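As an aside, if you'd rather bake the blur into the drawing itself instead of styling the element, the 2D context has its own filter property. This is just a sketch of the idea, and browser support varies:

// Alternative sketch: apply the blur at draw time instead of via CSS.
// The filter has to be set before the draw call it should affect.
ctx.filter = "blur(1px)";
ctx.drawImage(img, 0, 0);
ctx.filter = "none"; // reset so later draws aren't blurred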

Sweet! Now we have a really messed up image. I love it.

You can view the full source on JSFiddle.

What next?

My original intent was to try to make a clone of Kaboom! for the Atari 2600, but I wanted to start with the graphical filter for the sake of realism. So, next up, I'd like to apply this to an actual game!

I suspect there are also some ways to make this code more efficient, so I'll be re-visiting it soon.

Keep tearin' and bleedin', friends. 😈