This patch adds the platform-agnostic media selector and changes the way our themes behave as follows: if the default Firefox theme is selected, Firefox will match the system appearance (current default theme in light mode, dark theme in dark mode). Note that about:addons will continue to show “default” as the selected theme, even when it is technically using the dark theme under the hood to match the system’s dark mode. If any Firefox theme other than “default” is selected in about:addons, Firefox will not change themes when the system appearance changes.
This is missing from the release notes. I think this is true for macOS and Windows; I am not sure about other platforms.
US Customs Contractor Hack Breaches Traveller Images
US Customs and Border Protection (CBP) has admitted a data breach at a sub-contractor has compromised images of individuals and vehicles entering and leaving the country.
The controversial agency first learned of the “malicious cyber-attack” on May 31.
And we know this was a “malicious cyber-attack” exactly how?
“CBP learned that a subcontractor, in violation of CBP policies and without CBP’s authorization or knowledge, had transferred copies of license plate images and traveler images collected by CBP to the subcontractor’s company network,” it said in a statement.
“Initial information indicates that the subcontractor violated mandatory security and privacy protocols outlined in their contract.”
“Security by contract” isn’t a thing. And the data was breached … how?
CBP and the Transportation Security Administration (TSA) both fall under the Department of Homeland Security (DHS). Their collective track record on privacy, cybersecurity, and basic physical security leaves much to be desired.
Back to the breach! Thank goodness the CBP is now on the case. Per the Atlantic,
CBP claims they’ve already conducted a search, but haven’t found any of the stolen images on the dark web, where hackers sometimes post stolen information for sale. In its statement to The Atlantic, CBP said it’s working with law enforcement to continue the search and survey the full extent of the damage. It hasn’t yet commented on the scope of the breach or offered specifics on the data that was stolen. Perceptics did not immediately respond to a request for comment.
CBP would not say which subcontractor was involved. But a Microsoft Word document of CBP’s public statement, sent Monday to Washington Post reporters, included the name “Perceptics” in the title: “CBP Perceptics Public Statement.”
Perceptics representatives did not immediately respond to requests for comment.
CBP spokeswoman Jackie Wren said she was “unable to confirm” if Perceptics was the source of the breach.
This whole thing – from prevention to protection to monitoring to response to recovery – was manageable. Yet another takeaway is that CBP has no Incident Response Plan (IRP) at even the most basic level. How do we know? An agency with one would not have sent the press a Word document titled with the name of the vendor at the source of the leak.
It also calls into question the whole idea of a “malicious cyber-attack”. It seems more likely that Perceptics, the alleged source of the data leak, failed to safeguard data its contract said it should not have had access to, yet somehow acquired from CBP without the agency’s knowledge.
Hanlon’s Razor says to never attribute to malice that which is adequately explained by stupidity. Maybe the corollary in this case is never attribute to “malicious cyber-attack” that which is adequately explained by opportunism met by trivial, if any, security? I merely speculate …
In anticipation of macOS 10.15 Catalina, I have changed my shell from bash to zsh. macOS 10.15 will use zsh as the new default, and I was pretty sure that things would break immediately unless I prepared – so I did prepare, and I found the transition very simple.
Switch if you want to switch. Follow your joy. I’m not going to tell you otherwise, though it is not a path I expect to walk in the near term. zsh is fine. I played with it several times. There is no compelling reason for me to switch.
AirPods. This is my favorite gadget in years, the first real VR/AR device that feels seamless … The freedom of wireless headphones feels similar to when I first used a laptop, wifi, and dockless bike share.
I cannot imagine how this is remotely true and Jason Kottke doesn’t elaborate.
I have a pair of AirPods. They are convenient, but a purchase I regret. The device is not augmenting or virtualizing reality in any way. That is, until I take one Pod out of an ear and restart my music or podcast.
I used wireless headphones before the AirPods. They did what these do: play music; play podcasts; and do so without a wire (until charge-time).
Apps are useful because they help companies better engage with customers through prompts and notifications.
However, developing a good [ed: bold mine] app is expensive, time-consuming, and needs multiple iterations. Further, they’re mobile operating system (OS)-dependent, which means, most companies need to develop two apps — one for Apple’s iOS and Google’s Android phones.
Good is the operative word. PWAs go for the lowest common denominator.
This is why the demand for Progressive Web Apps (PWAs) is increasing rapidly. They’re everything that traditional, native mobile apps aren’t.
PWAs are based on web-browsers
they’re quick to build and deploy
seem to be safer than native mobile apps
As far as I know, there is little evidence to back this claim up.
and work on all kinds of mobile operating systems.
And they aren’t very good.
Wait, what’s a PWA again?
A PWA is an app that runs in your mobile browser (Chrome, Safari, etc.) and doesn’t need to be installed.
Not only that, PWAs provide a full screen experience. They look and feel just like native or regular mobile apps – with an icon neatly sitting on your home screen and push-notification capabilities.
Thanks to help from ‘service workers’, PWAs work even if users are offline or on low-quality networks.
A service worker is a snippet of code, a script that runs in the background and helps a PWA function. It’s one of its critical building blocks. Service workers help PWAs do things like send notifications to users and stay up-to-date.
Service workers help provide an engaging experience while offline and ensure that your application loads quickly.
As a malicious actor, PWAs mean I only need to focus on one target.
Should all businesses get a PWA now?
Well, the Internet is of the opinion that you need PWAs to make life easier for customers.
Well, Google is of the opinion, for sure.
Irrespective of size, PWAs have provided great benefits to companies that have been early adopters of the technology.
If PWAs are so great, then let users know they are about to install a PWA versus a purpose built app. And let them have the choice to run them in the browser instead of as a fake app.
My complaints about PWAs largely echo my issues with Electron apps on the desktop: both rest on a lie, where the user doesn’t know they are vulnerable to reported web security issues because there is no transparency to the user. Neither is a native application technology.
Overall, it seems as though demand from customers for faster experiences on mobile is driving up the demand for PWAs, and this might continue to grow in the future as the technology can support WebVR, an intelligent and modern way for companies to deliver VR content to customers.
Bringing VR or AR into the discussion makes the PWA push less attractive.
Although the article, by Seth Kenlon, is advertised as considering the question “Why (prose) writers should use Git,” I think the more important takeaway is that writers should embrace plain text. Kenlon makes a persuasive case that authors would be better off trashing their word processors and using a combination of a text editor and Markdown.
Kenlon’s text editor of choice is Atom (although he does mention Emacs as an alternative), which is, I think, leaving money on the table. Other than the obvious but subjective judgment that Emacs is a better, more customizable editor, it is virtually universally acknowledged that Magit is the best Git interface—integrated or not—and that Org mode markup is superior to Markdown, especially when its Babel interface is taken into consideration.
Of course, those are the opinions of an Emacs partisan so others may disagree but it’s hard to see how one can argue about Magit or Org mode. In any event, the important point stands: embrace plain text. If you do any writing at all, you should take a look at Kenlon’s article, especially if you’re still using Word or one of its evil offspring.
Lots of folks love Twitter, of course, but at least for my purposes, RSS is a much better solution. A Tweet is a good way to discover that the latest version of Emacs, say, has been released but if you want thoughtful analysis a blog or technical article is a much better bet.
Org used to have a powerful if clumsy template expansion system. It was killed off in version 9.2 and somehow I missed the news.
Change in the structure template expansion
Org 9.2 comes with a new template expansion mechanism, combining org-insert-structure-template bound to C-c C-,.
If you customized the org-structure-template-alist option manually, you probably need to update it, see the docstring for accepted values.
If you prefer using previous patterns, e.g. <s, you can activate them again by requiring the Org Tempo library with (require 'org-tempo) in your init file,
or add it to org-modules.
If you need complex templates, look at the tempo-define-template function or at solutions like Yasnippet.
None of the new options work any better or provide any more value than the old one. Forget about finding out why the change was made. They do require me to redo 20 customized options, a task I do not relish.
My frustration is with the lack of transparency behind the change. I follow the org-mode mailing list, and I not only missed the whole thing but can’t find its genesis.
I have a lengthy TODO.org of things I might eventually implement for Emacs, most of which are not exactly useful, are challenging to do, or both. A NES emulator fits these criteria neatly. I kept hearing that NES emulators can run on weak hardware, and learned that the graphics fit a tiled model (meaning I wouldn’t have to draw each pixel separately, only each tile), so given good enough rendering speed it shouldn’t be an impossible task. Then the unexpected happened: someone beat me to the punch with nes.el. It’s an impressive feat, but with one wrinkle: its overall speed is unacceptable. Mario runs with a slowdown of over 100x, rendering it essentially unplayable. For this reason I adjusted my goals a bit: emulate a simpler game platform smoothly in Emacs, at full speed.
Enter the CHIP-8. It’s not a console in that sense, but a video game VM designed in the 1970s with the following properties:
* CPU: 8-Bit, 16 general-purpose registers, 36 instructions, each two bytes large
* RAM: 4KB
* Stack: 16 return addresses
* Resolution: 64 x 32 black/white pixels
* Rendering: Sprites are drawn in XOR mode
* Sound: Monotone buzzer
* Input: Hexadecimal keypad
It’s perfect. Sound is the only real issue here, as the native sound support in Emacs is blocking, but this can be worked around with sufficient effort. Once it’s implemented there’s a selection of almost a hundred games to play, with a few dozen more if you implement the Super CHIP-8 extensions. And I wouldn’t have to implement Space Invaders, Pacman or Breakout with gamegrid.el. What could possibly be hard about this? As it turns out, enough to keep me entertained for a few weeks. Here’s the repo.
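The XOR rendering mode from the spec deserves a quick illustration: sprites are combined with the existing pixels via exclusive-or, so drawing the same sprite twice erases it, and erasing any set pixel doubles as collision detection via the VF flag. A sketch in Python (the names and the wrap-around behavior are my own illustration, not code from any particular emulator):

```python
W, H = 64, 32

def draw_sprite(screen, x, y, sprite):
    """XOR an 8-pixel-wide sprite onto the screen.

    screen is a flat list of W*H bits; sprite is a list of byte rows.
    Returns 1 if any set pixel was erased (the CHIP-8 collision flag).
    """
    collision = 0
    for row, byte in enumerate(sprite):
        for bit in range(8):
            if byte & (0x80 >> bit):                       # sprite pixel set
                idx = ((y + row) % H) * W + (x + bit) % W  # wrap at the edges
                if screen[idx]:                            # erasing a pixel
                    collision = 1                          # counts as collision
                screen[idx] ^= 1
    return collision

screen = [0] * (W * H)
vf = draw_sprite(screen, 0, 0, [0xF0])    # draw four pixels, no collision
vf2 = draw_sprite(screen, 0, 0, [0xF0])   # same sprite again: erases, collides
```

Drawing twice leaves the screen blank again, which is exactly how many CHIP-8 games erase and redraw moving objects.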
First of all, I located a reasonably complete-looking ROM pack. It’s not included with the code, as I’m not 100% sure of its legal status: some claim the games are old enough to be public domain, but since there are plenty of new ones, I decided to go the safe route. Sorry about that. Cowgod’s Chip-8 Technical Reference is the main document I relied upon; it’s clearly written and covers nearly everything I’d want to know about the architecture, with a few exceptions I had to figure out on my own. Mastering CHIP-8 was also helpful in filling some of the gaps.
To boot up a CHIP-8 game on real hardware, you’d use a machine where the interpreter is loaded between the memory offsets #x000 and #x200, load the game starting at offset #x200, then start the interpreter. It would begin with the program counter set to #x200, execute the instruction there, continue with the next instruction the program counter points to, and so on. To make things more complicated, there are two timers in the system running at 60Hz; these decrement a special register if it is non-zero, which is used to measure delays accurately and to play a buzzing sound. However, there is no specification of how fast the CPU runs or how display updates are to be synchronized, so I had to come up with a strategy to accommodate potentially varying clock speeds.
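The memory layout and the fetch step can be sketched like this (a toy Python model purely for illustration; the real thing is Emacs Lisp):

```python
RAM_SIZE = 0x1000          # 4 KB of RAM
PROGRAM_START = 0x200      # #x000-#x1FF is reserved for the interpreter

def load_rom(rom_bytes):
    """Return a fresh RAM image with the ROM loaded at offset #x200."""
    ram = bytearray(RAM_SIZE)
    ram[PROGRAM_START:PROGRAM_START + len(rom_bytes)] = rom_bytes
    return ram

def fetch(ram, pc):
    """Instructions are two bytes, stored big-endian."""
    return (ram[pc] << 8) | ram[pc + 1]

ram = load_rom(bytes([0x12, 0x00]))   # a one-instruction ROM: JP #x200
opcode = fetch(ram, PROGRAM_START)    # the opcode the first cycle would see
```

Execution then repeats fetch, decode, execute with the program counter advancing (or jumping) between cycles.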
The standard solution to this is a game loop where you aim for each cycle to take a fixed time, for example by executing a loop iteration, then sleeping for enough time to arrive at the desired cycle duration. This kind of thing doesn’t work too well in Emacs: if you use sit-for you get user-interruptible sleep; if you use sleep-for you get uninterruptible sleep and don’t allow user input to be registered. The solution is to invert the control flow by using a timer running at the frame rate, then being careful not to do too much work in the timer function. This way Emacs can handle user input while rendering as quickly as possible. The timer function executes as many CPU cycles as needed, decrements the timer registers if necessary and, finally, repaints the display.
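The inverted control flow boils down to: on each timer tick, do one frame’s worth of bounded work and return. A hedged Python sketch of that per-frame function (CYCLES_PER_FRAME is my guess, since the CHIP-8 clock speed is unspecified; in Emacs a repeating timer would drive it):

```python
FPS = 60
CYCLES_PER_FRAME = 10      # assumed clock of 600 instructions/second

class VM:
    def __init__(self):
        self.delay_timer = 0
        self.sound_timer = 0
        self.cycles_run = 0
        self.redraws = 0

    def cycle(self):
        self.cycles_run += 1   # stand-in for fetch/decode/execute

    def frame(self):
        """One tick of the 60 Hz timer: a frame's worth of work, no more."""
        for _ in range(CYCLES_PER_FRAME):
            self.cycle()
        if self.delay_timer > 0:   # both timers tick down at 60 Hz
            self.delay_timer -= 1
        if self.sound_timer > 0:
            self.sound_timer -= 1
        self.redraws += 1          # stand-in for repainting the display

vm = VM()
vm.delay_timer = 3
for _ in range(5):                 # the timer would call frame() repeatedly
    vm.frame()
```

Because frame() always returns quickly, the host (Emacs, here a plain loop) stays responsive to user input between ticks.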
Each component of the system is represented by a variable holding an appropriate data structure, most of which are vectors: RAM is a vector of bytes, the stack is a vector of addresses, the screen is a vector of bits, and so on. I opted for vectors over structs for simplicity’s sake. The registers are a special case, because if they’re represented by a vector, I’d need to index into it using parts of the opcode. It therefore makes sense to have constants representing each register, with their values equal to the value used in the opcode. Initially I defined the constants by copy-paste, but later switched to a chip8-enum macro that defines them for me.
The built-in sprites for the hex digits were shamelessly stolen from Cowgod’s Chip-8 Technical Reference. They are copied on initialization to the memory region reserved for the interpreter, which allows the LD F, Vx instruction to simply return the respective address. When implementing the extended built-in sprites for the Super CHIP-8 instructions there was no convenient resource to steal from, so instead I created upscaled versions of them with a terrible Ruby one-liner.
For debugging reasons I didn’t implement the game loop at first; instead I went for a loop that executes CPU instructions indefinitely, can be aborted manually with C-g, and then displays the screen state via a debug function that renders it as text. This allowed me to fully concentrate on getting basic emulation right before fighting with efficiency concerns and rendering speed.
For each CPU cycle the emulator looks up the current value of the program counter, fetches the two-byte instruction in RAM at that offset, then executes it, changing the program counter and possibly more in the process. One unspecified thing is what happens if the program counter points to an invalid address, and what actual ROMs do in practice when they’re done. Experimentation showed that instead of running off into invalid addresses, they fall into an infinite loop that always jumps to the same address.
Due to the design choice of fixed two-byte instructions, the type and operands of each instruction are encoded inline and need to be extracted with basic bit fiddling. Emacs Lisp offers logand and ash for this, corresponding to &, << and >> in C. First the bits to be extracted are masked using logand with an argument in which all bits to be kept are set to one, then the result is shifted all the way to the right with ash using a negative argument. Take, for example, the JP nnn instruction, which is encoded as #x1nnn: you’d extract the type by masking the opcode with #xF000, then shifting with ash by -12. Likewise, the argument can be extracted by masking with #x0FFF, with no shift needed as the bits are already on the right.
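In Python notation, where x >> 12 plays the role of (ash x -12), the extraction looks like this (the field names follow Cowgod’s conventions; the function itself is my own sketch):

```python
def extract(opcode):
    """Split a 16-bit CHIP-8 opcode into the commonly used fields."""
    kind = (opcode & 0xF000) >> 12   # instruction type (high nibble)
    nnn  =  opcode & 0x0FFF          # 12-bit address, no shift needed
    x    = (opcode & 0x0F00) >> 8    # first register operand
    y    = (opcode & 0x00F0) >> 4    # second register operand
    kk   =  opcode & 0x00FF          # 8-bit immediate
    n    =  opcode & 0x000F          # 4-bit immediate
    return kind, nnn, x, y, kk, n

kind, nnn, *_ = extract(0x1234)      # JP #x234: kind is 1, nnn is #x234
```

Masking keeps only the bits of interest; the shift moves them down so they can be used directly as numbers or register indices.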
A common set of patterns comes up when dissecting the opcodes, so the chip8-exec function saves all the interesting parts of the opcode in local variables, using the abbreviations from Cowgod’s Chip-8 Technical Reference; a big cond then determines the type of opcode, and each branch modifies the state of the virtual machine as needed.
Nearly all instructions end up incrementing the program counter by one instruction. I borrowed a trick from other emulators here: before executing chip8-exec, the program counter is unconditionally incremented by the opcode size. In case an instruction needs to do something different, like changing it to a jump location, it can still override the value manually.
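A minimal illustration of the pre-increment trick, in Python rather than Emacs Lisp (the two opcodes handled here are just enough to show one fall-through and one override; everything else is omitted):

```python
PROGRAM_START = 0x200

def exec_cycle(vm):
    """Fetch, pre-increment the PC, then dispatch.

    A jump simply overwrites the already-incremented PC; every other
    instruction gets the increment for free (hypothetical mini-dispatch).
    """
    opcode = (vm["ram"][vm["pc"]] << 8) | vm["ram"][vm["pc"] + 1]
    vm["pc"] += 2                       # unconditional pre-increment
    kind = opcode >> 12
    if kind == 0x1:                     # JP nnn: override the PC
        vm["pc"] = opcode & 0x0FFF
    elif kind == 0x6:                   # LD Vx, kk: falls through to next insn
        vm["v"][(opcode & 0x0F00) >> 8] = opcode & 0x00FF

ram = bytearray(0x1000)
ram[0x200:0x204] = bytes([0x6A, 0x42,   # LD VA, #x42
                          0x12, 0x00])  # JP #x200 (infinite loop)
vm = {"ram": ram, "pc": PROGRAM_START, "v": [0] * 16}
exec_cycle(vm)   # load: PC advances to #x202
exec_cycle(vm)   # jump: PC is overridden back to #x200
```

The second cycle also demonstrates the “jump to self” idiom real ROMs use when they are done.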
To test my progress I picked the simplest (read: smallest) ROM doing something interesting: Maze by David Winter. My debug function printed the screen by writing spaces or hashes to a buffer, separated by a newline for each screen line. After I got this one working, I repeated the process with several other ROMs that required no user input and displayed a (mostly) static screen. The most useful from the collection was “BC Test” by BestCoder, as it covers nearly all opcodes and tests them in a systematic fashion. Here’s a list of other ROMs I found useful for testing other features, in case you, the reader, embark on a similar adventure:
* Jumping X and O: Tests delay timer, collision detection, out of bounds drawing
* Sierpinski triangle: Slow, tests emulation speed
* Zero: Animation, tests rendering speed (look for the flicker)
* SC Test: Tests nearly all opcodes and a few Super CHIP-8 ones
* Font Test: Tests drawing of small and big built-in sprites
* Robot: Tests drawing of extended sprites
* Scroll Test: Tests scrolling to the left and right
* Car Race Demo: Tests scrolling down
* Car: Tests emulation speed in extended mode
* Emutest: Tests half-pixel scroll, extended sprites in low-res
Debugging and Analysis
Surprisingly enough, errors and mistakes keep happening. Stepping through the execution of each command with edebug gets tiring after a while, even when using breakpoints to skip to the interesting parts. I therefore implemented something I’d seen in Circe, my preferred IRC client: a logging function that only logs if logging is enabled and writes its output to a dedicated buffer. For now it just logs the current value of the program counter and the decoded instruction about to be executed. I added the same kind of logging to a different CHIP-8 emulator, chick-8 by Evan Hanson from the CHICKEN Scheme community. Comparing the two logs allowed me to quickly spot where they start to diverge, giving me a hint as to which instruction was faulty.
Looking through the ROM as it is executed isn’t terribly enlightening; it feels like watching through a peephole, never giving you the full picture of what’s about to happen. I started writing a simple disassembler that decodes every two bytes and writes their offset and meaning to a buffer, but stopped working on it after realizing that I had a much more powerful tool at hand to do disassembly and analysis properly: radare2. As it didn’t recognize the format correctly, I only used its most basic featureset for analysis, the hex editor. By displaying the bytes at a width of two per row and searching for hex byte sequences with regex support, I was able to easily find ROMs using specific opcodes.
Later, after I’d finished most of the emulator, I started developing a CHIP-8 disassembly and analysis plugin using radare2’s Python scripting support. I ran into a few inconsistencies in the documentation, but eventually figured everything out and got pretty disassembly with arrows visualizing the control flow for jumps and calls.
Later still I discovered that radare2 actually does have CHIP-8 support in core; you need to enable it explicitly by adding -a chip8 to the command-line arguments, as it cannot auto-detect that a file is a CHIP-8 ROM. The disassembly support is decent, but the analysis part had a few omissions and mistakes leading to less nice graphs. Using my Python version as a basis, I managed to improve the C version of the analysis plugin to the same level and even surpass it, as the C API allows attaching extra metadata to individual instructions, such as inline commentary. There is a pending PR for this functionality now; I expect it to be merged soon.
For maximum speed I set up firestarter to recompile the file on each save, added the project directory to load-path, and always launched a new Emacs instance in which I loaded the package and emulated a ROM file. This is fine if there isn’t much to test, but it’s hard to detect regressions this way. At some point I decided to give the great buttercup library another try and wrote a set of tests exercising every supported instruction with all the edge cases I could think of. For each test the VM is initialized, some opcodes are loaded and chip8-cycle is called as often as needed, while testing the state of the registers and other affected parts of the machinery. It was quite a bit of grunt work due to the repetitive nature of the code, but it gave me greater confidence to just mess around with the code, as retesting everything took less than a second.
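The shape of those tests translates directly to any language: initialize a VM, load a few opcodes, run the cycle function as often as needed, then assert on the machine state. A hedged Python miniature (only the ADD Vx, kk opcode is modeled here; this is an illustration of the pattern, not the package’s buttercup suite):

```python
def make_vm(opcodes):
    """Fresh toy VM with the given opcode bytes loaded at #x200."""
    ram = bytearray(0x1000)
    ram[0x200:0x200 + len(opcodes)] = bytes(opcodes)
    return {"ram": ram, "pc": 0x200, "v": [0] * 16}

def cycle(vm):
    opcode = (vm["ram"][vm["pc"]] << 8) | vm["ram"][vm["pc"] + 1]
    vm["pc"] += 2
    if opcode >> 12 == 0x7:             # ADD Vx, kk: wraps at 8 bits
        x = (opcode & 0x0F00) >> 8
        vm["v"][x] = (vm["v"][x] + (opcode & 0xFF)) & 0xFF

def test_add_wraps_around():
    vm = make_vm([0x70, 0xFF,           # ADD V0, #xFF
                  0x70, 0x02])          # ADD V0, #x02 -> wraps to #x01
    cycle(vm)
    cycle(vm)
    assert vm["v"][0] == 0x01

test_add_wraps_around()
```

Because each test builds its own VM, they stay independent and fast, which is what makes rerunning the whole suite after every change practical.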
Make no mistake here though: extensively testing the complicated parts of a package (I don’t believe it’s worth testing the simple parts) is in no way a replacement for normal usage, which can uncover completely different bugs. It’s more of a safety net, to make sure code changes don’t break the most basic features.
Retrospectively, this was quite the ride. Normally you’d pick a suitable game or multimedia library and be done, but this is Emacs, no such luxuries here. Where we go we don’t need libraries.
My favorite way of drawing graphics in Emacs is creating SVG on the fly using the esxml library. This turned out to be prohibitively expensive: not only did it fail to meet the performance goals, it also generated an excessive amount of garbage, as trees were recursively walked and thrown away over and over again. A variation on this is having a template string resembling the target SVG, replacing parts of it and generating an image from the result. I attempted this, but quickly gave up, as it was too bothersome to come up with suitable identifiers and replace all of them correctly.
I still didn’t want to drop the SVG idea. Considering this was basically tiled graphics (with each tile being an oversized pixel), I considered creating two SVG images for white and black tiles respectively, then inserting them as if they were characters on each line. The downside of this approach was Emacs’ handling of line height: I couldn’t figure out how to suppress it completely so as not to leave gaps in the rendering. gamegrid.el somehow solves this, but its code is rather convoluted.
At this point I was ready to go back to plain text. I remembered that faces are a thing and used them to paint the background of the text black and white. No more annoying gaps. With this I could finally work and started figuring out how to improve the rendering. While the simple solution of always erasing the buffer contents and reinserting them again did work, there were plenty of optimization possibilities. The most obvious one was using dirty frame tracking to tell if the screen even needed to be redrawn. In other words, the code could set a chip8-fb-dirty-p flag and if the main loop discovered it’s set, it would do a redraw and unset it. Next up was only redrawing the changed parts. For this I’d keep a copy of the current and previous state of the screen around, compare them, repaint the changed bits and transfer the current to the previous state. To change the pixels in the buffer I’d erase them, then insert the correct ones.
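The double-buffered diffing can be sketched as follows (Python purely for illustration; the hypothetical paint callback stands in for whatever actually touches the Emacs buffer):

```python
W, H = 64, 32

def repaint(current, previous, paint):
    """Repaint only the pixels that changed since the last frame,
    then remember the current frame for the next comparison."""
    changed = 0
    for i in range(W * H):
        if current[i] != previous[i]:
            paint(i, current[i])        # in Emacs: tweak the text at that spot
            changed += 1
    previous[:] = current               # transfer current -> previous in place
    return changed

cur = [0] * (W * H)
prev = [0] * (W * H)
cur[0] = cur[1] = 1                     # two pixels turned on this frame
painted = []
n = repaint(cur, prev, lambda i, v: painted.append(i))  # paints two cells
m = repaint(cur, prev, lambda i, v: painted.append(i))  # nothing changed now
```

Combined with a dirty flag that skips repaint entirely when nothing was drawn, this keeps per-frame work proportional to what actually changed.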
The final optimization occurred to me much later, when implementing the Super CHIP-8 instructions. It was no longer possible to play games smoothly at quadrupled resolution, so I profiled and discovered that erasing text was the bottleneck. I considered the situation hopeless, fiddled around with XBM graphics backed by a bit-vector, and had little luck getting them to work nearly as well at low resolution. Only then did it occur to me that I hadn’t tried simply changing the text properties of existing text instead of replacing it. That fixed all remaining performance issues. Another thing I realized is that anything higher-resolution than this will require extra trickery, maybe even C modules.
Garbage Collection Woes
Your code may be fast, your rendering impeccable, but what if every now and then your bouncing-letters animation stutters? Congratulations, you’ve run into garbage collection ruining your day. In a language like C it’s much more obvious when you’re about to allocate memory from the heap; in a dynamic language it’s much harder to pin down what’s safe and what’s not. Patterns such as creating new objects on the fly are strictly forbidden, so I tried fairly hard to avoid them, but didn’t completely succeed. After staring hard at the code for a while I found that the code transferring the current to the old screen state was using copy-tree, which kept allocating vectors all the time. To avoid this I wrote a memcpy-style function that copies values from one array to another.
Another sneaky example was the initialization of the emulator state, which assigned zero-filled vectors to the variables. I noticed this one only because the test runner prints the running time of each test: most took a fraction of a millisecond, but every sixth or so took over 10 milliseconds for no obvious reason. This turned out to be garbage collection again. I rediscovered the fillarray function, which behaves much like memset in C, used it in initialization (with the vectors assigned at declaration time instead), and the pauses were gone. No guarantees that this was the last of it, but I haven’t observed any other pauses.
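Both fixes boil down to the same idea: reuse preallocated storage instead of creating new objects. As a rough Python analogy for the memcpy-style helper and fillarray (Python’s allocator differs from Emacs Lisp’s, so this only shows the shape of the idea):

```python
def copy_into(dst, src):
    """memcpy-style: overwrite dst's elements instead of allocating
    a new vector (what copy-tree was wastefully doing each frame)."""
    for i, v in enumerate(src):
        dst[i] = v

def fill(vec, value=0):
    """fillarray/memset-style reset, reusing the same vector."""
    for i in range(len(vec)):
        vec[i] = value

screen = [1, 0, 1, 1]
old = [0] * 4
copy_into(old, screen)    # transfer state without a fresh allocation
fill(screen)              # reset in place for the next power-on
```

Neither helper allocates, so calling them every frame produces no garbage for the collector to chew on later.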
If your Emacs has been compiled with sound support, there will be a play-sound function. Unfortunately it has a big flaw: as long as the sound is playing, Emacs blocks, so using it is a non-starter. I initially tried using the visual bell (which inverts parts of the screen) as a replacement, then discovered that it does the equivalent of sit-for, and calling it repeatedly will, in the worst case of no pending user input, wait as long as all the intervals combined. There was therefore no easy built-in solution. To let users plug in their own, I defined two customizable functions defaulting to displaying and clearing a message: chip8-beep-start-function and chip8-beep-stop-function.
The idea is that, given a suitable asynchronous function, you could kick off a beep and later stop it. Spawning processes is the one thing you can easily do asynchronously in Emacs, so if you had a way to tell a subprocess to start and stop playing a sound file, that would be good enough. I remembered that mplayer has a slave mode and that mpv improved on it in a multitude of ways, so I looked into the easiest way of remote-controlling it. It turns out that mpv did away with slave mode in favor of control via a FIFO or a socket. To my surprise I actually made it work via FIFO; the full proof of concept can be found in the README.
The CHIP-8 supports two ways of checking user input: checking whether a key is (not) pressed (non-blocking) and waiting for any key to be pressed (blocking). Doing this in a game library wouldn’t be worth writing about, but this is Emacs, after all, where a distinction between key-down and key-up exists only for mouse events. After pondering the issue for a while I decided to fake it: a generic key-handler function keeps track of when each key was last pressed, and that timestamp is compared against the current time. If the difference is below a reasonable timeout, the key is considered pressed; otherwise it isn’t.
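The timestamp trick in sketch form (Python for illustration; KEY_TIMEOUT is an arbitrary value I picked, not one from the package):

```python
import time

KEY_TIMEOUT = 0.2   # seconds a key counts as "held" after its last event

last_pressed = {}   # key -> timestamp of its most recent press event

def on_key(key, now=None):
    """Generic key handler: just record when the key was last seen."""
    last_pressed[key] = time.monotonic() if now is None else now

def key_down_p(key, now=None):
    """A key is 'pressed' if it was seen within the timeout window."""
    now = time.monotonic() if now is None else now
    return now - last_pressed.get(key, float("-inf")) < KEY_TIMEOUT

# Explicit timestamps make the behavior deterministic for illustration:
on_key(0xA, now=10.0)
pressed = key_down_p(0xA, now=10.1)    # within the window
released = key_down_p(0xA, now=10.5)   # window elapsed
```

As long as the OS delivers key-repeat events faster than the timeout, a held key keeps refreshing its timestamp and reads as continuously pressed.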
Solving the other problem required far more effort. The emulator was by this point sort of a state machine, as I tracked whether it was running with a boolean variable in order to implement a pause command. I reworked the variable and all code using it to be mindful of the current state: playing, paused, or waiting for user input. The blocking command then merely changes the current state to waiting and sets a global variable to the register to be filled with the pressed key, setting the stage for the generic key-handler function to continue execution. If that function detects the waiting state and a valid key press, it records the key in the respective register and puts the emulator back into the playing state.
Actually testing this with a keypad demo ROM unveiled a minor bug in the interaction between the main loop and the redrawing logic. Remember that a number of CPU cycles were executed, then a redraw was triggered if needed? Well, imagine that in the middle of the CPU cycles the state changes to waiting and the redraw never happens! This would produce an inconsistent screen state, so I changed it to do a repaint immediately. Furthermore, if the state changed to waiting, the loop would still execute more cycles than needed (despite it being a blocking wait), so I had to add an extra check in the main loop’s cycling to see whether the state had changed and, if so, skip the loop iteration altogether.
At this point I was pretty much done with implementing the full CHIP-8 feature set and started playing games like Tetris, Brix and Alien.
Yet I wasn’t satisfied, for some strange reason. I probably longed for more distraction and set out to implement the remaining Super CHIP-8 instructions. Unlike the main instruction set, these weren’t nearly as well documented. My main resource was a schip.txt file which briefly describes the extra instructions. The most problematic extension is the extended mode, which doubles the screen dimensions, requiring a clever way to draw a bigger or smaller screen whenever it is toggled. There are two ways of implementing such a thing: drawing to one of two separate screen objects and painting the correct one, or always drawing to a big screen and rendering in a downscaled mode when needed. For simplicity’s sake I went with the first option.
The extra scroll extensions allow game programmers to change the viewport efficiently (though for some reason an instruction for scrolling up was left out). My challenge here was to change the screen’s contents in place; doing this correctly takes extra care not to overwrite contents that still need to be moved elsewhere. The trick is to iterate over the screen lines in reverse order when necessary.
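A sketch of the in-place scroll (Python for illustration; scrolling down means iterating over the destination rows bottom-up, so each source row is read before it gets clobbered):

```python
W, H = 8, 4   # a tiny screen is enough to show the idea

def scroll_down(screen, n):
    """Scroll the screen contents down by n lines, in place.

    Iterating over destination rows in reverse order is the trick:
    rows still to be moved are never overwritten before being read.
    """
    for y in range(H - 1, -1, -1):
        for x in range(W):
            src_y = y - n
            screen[y * W + x] = screen[src_y * W + x] if src_y >= 0 else 0

screen = [0] * (W * H)
screen[0] = 1                # a single pixel in the top-left corner
scroll_down(screen, 1)       # the pixel moves down one line; top row clears
```

Scrolling up (were the instruction to exist) would iterate top-down instead, for the same read-before-overwrite reason.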
A few more instructions and optimizations later, I was ready to play probably the silliest arcade game ever conceived, Joust. The sprites in the picture below are supposed to be knights on flying ostriches trying to push each other down with their lances, but they look more like flying rabbits to me.
Writing an emulator gives you great insight into how a machine actually works. Details like memory mapping that you used to gloss over feel far more intuitive once you have to implement them yourself. One of the downsides is that I didn’t play games for my own enjoyment, but to further improve the emulator and understand the machine.
A few games and demo ROMs revealed bugs in the emulator, such as how to deal with sprites drawn past the screen boundaries. Cowgod’s Chip-8 Technical Reference tells you to wrap around, but Blitz by David Winter seems to think otherwise: when rendered with wrap-around, the player sprite immediately collides with a pixel on the edge and the “GAME OVER” screen is displayed. In this case I decided to forego the recommendation and clip the rendering to the screen edges.
It’s not always easy to make such decisions. Some quirks seem fairly reasonable, such as preferably setting the VF flag to indicate an overflow/underflow condition for arithmetic, although it’s not always specified. Some are fairly obscure, such as the interpretation of Super CHIP-8 extensions in low-resolution mode: a demo insists that instead of drawing a high-resolution 16 x 16 sprite, an 8 x 16 sprite should be drawn. As this doesn’t appear to affect any game and would require significant support code, I decided against implementing it. In one case I was conflicted enough about the differing interpretations of the bit-shifting operators that I introduced a customizable option to toggle between the two, with the incorrect but popular behavior as the default.
John Borwick was looking for a writing environment that suited him. He’d tried Scrivener and some of the other tools but they didn’t work for him. Then he saw Jay Dixit’s video that I wrote about back in 2015 and decided to adapt Dixit’s solution.
Borwick has an interesting post that describes his system for writing in Emacs. The heart of the system is the use of org-panes to provide three panes: a top-level outline of his document, a detailed outline, and the main pane for the actual writing. He also uses Olivetti, which many writers favor because it increases the margin sizes. He uses a few other convenience packages but the environment centers around the three views of the piece he’s writing.
His solution is, in a sense, the polar opposite of the blank page environment preferred by many writers. In that setup, there is nothing but an empty space that you can put words into. In the extreme case, even the mode line is eliminated. Bastien Guerry wrote a post about how to do this with Emacs. The nice thing is that Emacs can provide either environment–and many others, as well–so you’re covered whatever your preferences.
I do all my writing in Emacs–mostly in Org-mode–and wouldn’t consider using any other tool. It’s easy to adapt it to provide just what I want and if some other tool has a nice feature, it’s usually easy to add it to Emacs.
This looks interesting. I want to take a deeper dive.
I’m at another tipping point with Org mode – I either need to invest a bit more in making it suit me or jump wholesale to a macOS/iOS COTS solution. Seeing posts like this makes me want to invest.