Unless you have been living in a cave, on an island, or on a mountainside somewhere, you have heard that the new weapon of choice is cyber-based. Attacking a company by wiping out its databases and computer files, spreading a virus, or planting malware seems to be the most lethal and fearless method these days; just ask Sony, Target, or the dozens of other companies that have had the privacy of their computers and networks violated and then wiped out. The Internet has brought us a great wave of invention, interconnection, and technological advancement. With that progress, a great new security risk has been exposed. Networks used to be like castles: self-contained, with huge virtual walls that one could not get through. Just as castles disappeared when great clans became countries, networks have evolved into inter-networks. And just as with the castles, where security was high while the walls stood, once the walls were gone, security became an issue.
We hear so much in the news about how these attacks are deployed: through malware, viruses, and keystroke logging. To combat them, a huge industry of counter-technologies has grown up - virus scanners and malware detectors generating huge profits. For the most part, however, these tools are an afterthought. In other words, they provide a defense against a method of attack that has already been deployed. They cannot predict a future attack, nor can they detect what they do not know. Does this make them useless? Absolutely not. Use them with great abandon.
What we do not hear about is the root cause of these security issues. The more I talk to people about it, the more they appear convinced that it is simply the nature of the beast. Like life without castles: you build a police force that does a great job of minimizing the impact, you pay for it as a community, and every now and then someone's home or business is going to be robbed. Hopefully we respond quickly, identify and punish the perpetrator, and continue onward. But what if there is a root cause? What if we put aside the complacency of acceptance and looked a little deeper? What if we went back to the roots of computing and networking and critically analyzed why there is so much insecurity?
To answer these questions, you ultimately end up staring right at the software: the software that runs computers, switches, routers, and anything else with a processor. The cyberattacks all have something in common - they change the behavior of the software so that it performs improper tasks. The person who wrote the software says: "That's not my fault, my software works perfectly! If you modify it, that isn't my fault!" Which raises the question: how the heck does your software allow itself to be modified? The answer is usually a shoulder shrug, or a finger pointed in any direction other than at the software's author.
Considering this, I had an epiphany. Almost all of the software - the great software - written today is written by degreed software engineers. Arguably this was not the case for Bill Gates, who never finished a degree, but most of his software is no longer used, and the problems it solved were considerably simpler than those being tackled today. Networking and computing problems are extremely intricate, complex, and sophisticated. Solving them requires large teams of well-prepared software engineers, often spanning multiple countries. Great universities around the globe produce great engineers able to take on these immensely difficult problems. Learning curves are steep, delivery timeframes are short, and innovation rates are expected to be exponential. So why the problems with security? Could it be that the engineers themselves are not prepared to write secure software?
I began to ask. I often get to meet teams of software engineers, many of them recent graduates of legendary software engineering institutions. "How many of you took some sort of secure-coding class as part of your degree program?" I ask. I have never written down the actual numbers, but from memory and rule of thumb I will tell you that one or two out of 100 will raise their hand. That, to me, is an alarm bell - a red flag that needs to be run straight up a pole. So I ran it up: I contacted acquaintances at the institutions and asked them. Shouldn't I be getting 80 or 90 hands out of 100? The general response was the same: we are naturally concerned about this, but the cyberattacks are always changing, and whatever we teach, the people who write these bad pieces of software will always find a way around perfectly good code.
I don't buy that answer as acceptable. On the contrary, I would expect these great institutions to hire some of the best cybersecurity people to come in and teach classes on how attack code works and how attackers find the loopholes in a coding design, and to make mandatory, as part of every software degree program, projects in which students write code and then prove that the code is un-hackable. I know what you are thinking: which cybersecurity people would want to do this? I concur that there would be few, but those few could make a big difference - a much bigger difference than signing treaties.
With the ferocity of forward movement and the demand for innovation comes some amount of sloppiness in software development. This sloppiness results in enormous code output, with libraries and subroutines piled into software storehouses in order to speed development. Think of it as chefs storing every possible ingredient in a huge pantry so that they can produce a certain dish. Some of those ingredients will never be touched, but just in case, they are in the pantry. The same goes for software: no one really cares, as long as the compiled result fits on your enormous disk and in memory and does what is necessary. One can almost envisage the cyberattacker drooling slightly, knowing that somewhere in that immense pantry of software are weaknesses and vulnerabilities the software engineers never even tested.
One of the possible roots of this era of vulnerability in our computers, networks, and applications points directly back to the creators of the software's creators. We must immediately change degree programs to incorporate secure coding and testing practices, while remaining flexible and mindful of the continual improvement that must be built into these programs. The way to make that change immediate is for software companies to demand these skills outright, and to provide immediate training for those already employed, closing the knowledge gap. Like the Internet itself, and like society after castles, a self-policing system must be invoked, and that is attainable only by advancing knowledge and skills so that weaknesses and vulnerabilities cannot be so easily deployed and executed on our computers, networks, and applications.