A lot of attention is focused on cryptography and other security controls being manipulated by attackers who exploit poor implementations, lack of maintenance, and seemingly unforeseen omissions in the control's code. This can turn a trusted security control into a weapon of choice. I overheard a group of people discussing software control testing and the various methods in use: commercial code-analysis tools, in-house scripts and test packs, and the ingenuity of pen testing. After a while the conversation turned to testing for logic-based attacks on security controls, and the group seemed to agree there was nothing you could really do to get ahead of these sophisticated attacks.
What makes coding and implementing security controls harder than a fairly straightforward sales workflow, business logic, or a simple messaging protocol is that security controls and protocols are usually software implementations of a serious, well-defined mathematical algorithm. Many coders get into the software game with some inspiring courseware and long nights slaving over an integrated development environment. It's relatively fine to take some snappy libraries and glue them together with a bit of Java or Python for simple coding such as business logic, user interfaces, and API workflows.
However, to work with security algorithms you need skills in serious mathematics, an in-depth understanding of the algorithm, and an in-depth knowledge of how to implement the control safely within the software environment you are creating. Simply grabbing functions from a library and wiring them into your code can lead to disaster; you need to take a scientific approach. I believe we can test security controls thoroughly enough to limit or even avoid logic-based attacks, but it's not easy. Use-case-based testing only exercises a control's functions along the known flows of user and system interaction with the control; the full extent of the control's underlying logic may never be tested if that is the only approach. My point is that many teams are only doing use-case-based testing.
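As a concrete illustration of the library-wiring pitfall, here is a minimal Python sketch (the key and messages are hypothetical, chosen only for illustration): both verifiers call the standard library's HMAC correctly, yet the naive one compares tags with `==`, which can leak timing information, while the library's own constant-time comparison avoids that logic-level flaw. A use-case test would pass both versions, because both return the right answer for well-behaved inputs.

```python
import hmac
import hashlib

KEY = b"example-key"  # hypothetical key, for illustration only

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag for the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify_naive(message: bytes, tag: bytes) -> bool:
    # Functionally "correct", but '==' on secret data can return
    # early on the first differing byte: a timing side channel.
    return sign(message) == tag

def verify(message: bytes, tag: bytes) -> bool:
    # The library provides a constant-time comparison for exactly
    # this purpose; wiring in the wrong function is the disaster.
    return hmac.compare_digest(sign(message), tag)
```

Both functions agree on every input a functional test would check; only reasoning about the control's logic (here, timing behaviour) separates them.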
A security control's or protocol's algorithm can be formally understood through formal semantics (a way of describing how the algorithm truly works in words rather than symbols) to create proofs. The semantic proof of a security control should extend to your implementation surrounding the control. To do this you will need to create a formal view of what you are doing; as I said, it's not easy, but you are playing with fairly complicated stuff. What is missing is a set of standardised test cases, based on the formal proof of each security control and protocol, produced and released with the code itself. We shouldn't simply rely on another coder eyeballing changes; although useful, that can be dangerous, leading to failures like Heartbleed.
Failing to understand the control correctly will lead you to understanding failure. From the broader modelling and proofs, you create test cases that reflect the functions of the control itself within the overall project, going beyond how the users of the software interact with the control.
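To sketch what a proof-derived test case might look like, here is a minimal Python example (the scheme is standard PKCS#7 padding; the harness itself is a hypothetical illustration): the property "unpadding inverts padding for every message length" comes from the control's definition, not from any user-facing workflow, so it covers edge cases such as empty input and exact block multiples that use-case testing would likely never reach.

```python
import os

BLOCK = 16  # AES block size, assumed for this illustration

def pad(data: bytes) -> bytes:
    """PKCS#7: always append 1..BLOCK bytes, each equal to the pad length."""
    n = BLOCK - (len(data) % BLOCK)
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    """Strip PKCS#7 padding, rejecting malformed padding outright."""
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return data[:-n]

# Property from the definition: unpad(pad(m)) == m for EVERY length,
# and the padded output is always a whole number of blocks.
for length in range(0, 3 * BLOCK + 1):
    m = os.urandom(length)
    assert unpad(pad(m)) == m
    assert len(pad(m)) % BLOCK == 0
```

The loop over every length in a few blocks' range is the point: a use case supplies whichever message lengths the application happens to produce, whereas the property is quantified over all of them, exactly as a proof would be.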