The basis of language security is starting from a programming language with a well-defined, easy-to-understand semantics. From there you can prove (formally or informally) interesting security properties about particular programs. For example, if a program holds a secret k that some untrusted subcomponent C of it should not have access to, one can prove whether or not k can leak to C. This approach is taken, for example, by Google’s Caja compiler to isolate components from each other, even when they run in the context of the same web page.
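The isolation property described above is commonly enforced with object-capability patterns (the style Caja builds on): the untrusted component never receives k, only the specific capabilities it is granted. A minimal sketch, with a hypothetical `makeAuthService` and a toy hash standing in for real cryptography:

```javascript
// Capability-style isolation sketch: the secret k lives in a
// closure, and the untrusted component receives only the
// capabilities it is explicitly granted, never k itself.
function makeAuthService(k) {
  return {
    // May use k internally, but never returns it.
    sign(message) {
      let h = 0; // toy hash for illustration, not real cryptography
      for (const ch of message + k) {
        h = (h * 31 + ch.charCodeAt(0)) >>> 0;
      }
      return h.toString(16);
    },
  };
}

// The untrusted component C can request signatures,
// but holds no reference to k.
function untrustedComponent(auth) {
  return auth.sign("hello");
}

const auth = makeAuthService("super-secret");
console.log(untrustedComponent(auth));
```

In a language with well-defined semantics, "C cannot obtain k" becomes a provable claim about this program: no expression reachable from `untrustedComponent` evaluates to k.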
But the Spectre and Meltdown attacks have seriously set back this endeavor.
I suggest reading the post to get the full take.
Some of my time is spent talking with clients about secure development life cycle practices and tools that help bolster security early in the process. Lately I’ve been reflecting on how I was taught (and learned) to code using what is referred to as the Unix approach: small, well-understood, behaviorally consistent components brought together to make a more complex system.
This was in the days before these large package management systems.
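The incident I have in mind from the package-manager era is npm’s left-pad: a helper of roughly a dozen lines whose removal from the registry broke builds across the ecosystem. A from-memory sketch of what such a function looks like (not the exact published source):

```javascript
// A from-memory sketch of a left-pad style helper;
// the real npm package differs in details.
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch || ' ';
  // Prepend the pad character until the target length is reached.
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad('5', 3, '0')); // "005"
```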
Now think about this: the left-pad break was a software-based issue that, while hugely impactful, was easy to fix (select 11 lines of code, copy, paste). What happens when hardware isn’t behaviorally consistent, or is so fundamentally flawed that its insecurity isn’t fixable?
Going back even further, I’m reminded of the various Intel floating-point issues of the ’80s and ’90s (most famously the 1994 Pentium FDIV bug).
But I’ve drifted off topic.
What are your thoughts?