It means that what you're watching (or just watched) is the product of a file whose size doesn't exceed 64 kilobytes (or well, kibibytes): 65,536 bytes.
This is a rather small amount of data. For comparison, a 4-minute song in MP3 format is between 4 and 10 megabytes; at least 64 times the size. A high-definition video accompanying that song could be anywhere from 35 to 50 megabytes; more than 500 times the size. Even the floppy disks of the 1980s, among the smallest storage media a PC ever used, could hold more than a megabyte; our intros could fit on one more than a dozen times over.
Unlike a traditional animation, where you use a general-purpose media player to watch a video file, an executable is itself a specialized piece of software running on your computer.
Think of it as an application, like a game or a calculator or a word processor. But instead of allowing you to do a variety of things at once, the only "feature" it has is to show you a few minutes of pretty things.
"Realtime" means that instead of having each individual frame of an animation stored pixel-by-pixel in a series of static images, the visuals of an intro are created (rendered) as you're watching them.
From a technical perspective, this means the following: in a traditional "pre-rendered" animation, the rendering software can spend anywhere from minutes to hours drawing one frame of the final image sequence. Our intros, on the other hand, have to keep up with at least the traditional film frame rate (i.e. 24 frames a second), which leaves them only a fraction of a second to fully produce each high-quality image.
This essentially means that the intro is being "performed" by the computer as you're watching it. Imagine it like watching a film vs. watching a theatrical production: In the latter case there's the potential for problems, sometimes things can happen a bit slower or faster than normal, or maybe there's some improvisation and it looks different every time you go to see it.
There's no easy answer to this question, especially if you're not familiar with how code and data work, but the slightly simplified version is that sometimes the final result of a process (e.g. an image) is considerably larger than the steps required to recreate it. What we're able to do with our toolset is create complex imagery from relatively simple mathematical steps, and then store only those steps in the final binary.
If that sounds a bit dry, here's a relatively mundane example:
Imagine you take a sheet of paper and draw a red circle in the middle of it. Now you want to show it to a friend who is in another city: taking a photo or a scan of that picture would produce a static image that could easily be megabytes in size. Instead, you send your friend the paper size, the circle's position and size, and the pencil color: all of these fit in a single text message, and your friend can faithfully replicate the picture you drew.
Now imagine you get a robot that can draw you simple things; circles, squares, squiggly lines, gradients, and so on. You get the robot to draw a cool picture, and again instead of sending your friend a photo, you just send them whatever you told your robot, and they can reproduce that picture using the same steps, if they have the same robot.
What we do is design simple "robots", which are in our case pieces of code that can draw or make sounds, and then try to find the steps that end up creating the most vivid imagery and soundscapes. Once that's done, the resulting code and data should be small enough to be crunched into 64kb.
Generally there's nothing to worry about; please read this for further information.
No, it's apples and oranges. To go with the analogy above, robots are very good at geometric things: rectangles, circles, wobbly lines, synthesizer beats, and so on. However, a lot of the content commercial games use comes directly from humans: hand-painted textures, models of living things, voiceovers, motion capture performances, and so on. These things don't lend themselves well to our process.
Now, whether you can create a more abstract / sci-fi looking or sounding game from scratch with these techniques is a whole different ballgame; we know some people who tried, and they don't look back on those times fondly, for various reasons. That's not to say it's impossible, but it's not a fun process, and there is very little to gain from it apart from the gimmick factor.
Not in our opinion: We consider the operating system (and with it, the drivers and other API bindings) an integral part of the computer, something that is necessary to operate the hardware inside the machine. The more powerful the hardware becomes, the more complex the OS and drivers have to get to handle it, and the more we developers come to rely on them as a solid foundation to build on.
We actually use very little from any given operating system: we access the graphics API, the sound API, sometimes use some of the stock fonts, as well as a few helper functions every so often, and that's about it. We certainly can't pull out high quality textures from the depths of a Windows system folder.
We mostly use C/C++; the largest part of what we do is data management and abstraction, so we needed a higher-level language in order to keep our sanity. Also, compilers today are clever enough to be on par with, or better than, humans when it comes to optimization.
That's not to say that what we do doesn't require a fairly good understanding of the code that is produced by these compilers, but most of the optimizing we do is logical or algorithmic; instead of trying to rewrite a section of code to spare an instruction and win a few bytes, we often win larger amounts of space by rearranging our data structures and eliminating redundancies.
Traditionally, the 64 kilobyte category was formed because at the time, 64k was the upper limit of a DOS COM file: such a program had to fit within a single 65,536-byte segment of 16-bit real-mode memory. The technical limitation was of course abandoned later, but the arbitrary upper limit stuck.
Now, from our perspective, we enjoy 64k because it's an odd limitation where it's really not much, but at the same time it's more than enough: it gives you enough room to be able to produce interesting content, but not enough to do whatever you want. This way a lot of your creativity is put to the test, when you have to figure out what IS even possible in 64k, and then try and go beyond that.
There's a great interview with one of our old friends, ryg from Farbrausch, where he explains that in his view (and in ours), 64k is a great exercise in producing tight bundles of code and art, where there's really no room for gratuitous libraries or unnecessary abstraction - you constantly have to watch and plan what you're doing, and every line of code or piece of content has to be accounted for. It's the kind of programming and artistic exercise that really appeals to us.
As you can read above in the bit about video games, very little of this is applicable to commercial technology. Every now and then there are attempts to monetize it, but it rarely ever works out, mostly due to the unpredictability and artist-unfriendliness of the tech.
On a more personal level, we just do this for fun; it's our little escapism from our day jobs, where finally we don't have to care about the economic consequences of our actions. We do sometimes win a bit of prize money at demoparties, but it's nothing to write home about compared to the time we've invested in making the demos, as far as an hourly rate is concerned - but that's fine, it's not why we do it.
Money ends up being spent. Trophies are forever.
Yes, we know, thanks for pointing that out repeatedly.