With nearly 15 years of experience in the IT field and a mind for all things technical, Steven works to improve the quality and security of software applications on a daily basis. He joined RTTS in 2007 and has been working within the Functional and Security divisions, providing a host of professional services aimed at overall software quality improvement. Steven works out of the RTTS New York headquarters, has too many hobbies, and will seek to enlighten/inform/entertain with his observations.
Software Testing New Technology: The "Eyes" Have It
Time Magazine has just published an interesting article titled "Are Face-Detection Cameras Racist?" The article centers on face-detection technology, which is turning up everywhere in the current hardware market, from digital cameras to computer security, and on the trouble the technology has with faces from different racial groups. I am certain there is no shortage of opinions as to why this occurred, but I have my own to share here.
After reading the article, I have to admit that it's nice to see a technology story that gets to the heart of the issue: that adequate testing was probably not conducted on this feature before it was moved to market. Testing for this feature was probably relegated to the end of the production cycle, so it took the hit when time or resources became an issue. Since this is a hot new feature, there is the inevitable urge to get the technology into the product and into the hands of the consumer ahead of the competition. That is a pretty basic idea, and a certain amount of risk is always present in that situation, but at this point in history, when the internet makes it possible to register consumer disgust on a global scale in minutes, it may be worthwhile for a company to reevaluate how to mitigate that risk.
One way to correct this is with a well-planned, dynamic testing effort that starts well ahead of the feature's completion, shortly after (or possibly during) the initial planning phase. Moving away from the Waterfall Model may still be difficult for some companies to grasp, but if you are looking for solid evidence of why that model is flawed, look no further. Moving the testing phase further up in the development cycle (in addition to making it an iterative, traceable process) enables defects to be located earlier in the development lifecycle and rectified at a smaller cost. Organizations that choose to do so may be able to improve overall quality, cut costs, and save face…no matter what racial makeup that face may consist of.
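What might that earlier, iterative testing look like in practice? Here is a minimal sketch of a data-driven check that measures detection rates across demographic groups and flags the gap between them. Everything here is hypothetical: `detect_face` is a stand-in stub, not any real vendor's API, and the sample records are illustrative placeholders for a properly labeled test image set.

```python
def detect_face(sample):
    """Stub detector; a real test harness would call the product's
    face-detection API on an actual image here."""
    return sample["has_face_detected"]

def detection_rates(samples):
    """Per-group detection rate for a labeled set of test images."""
    totals, hits = {}, {}
    for sample in samples:
        group = sample["group"]
        totals[group] = totals.get(group, 0) + 1
        if detect_face(sample):
            hits[group] = hits.get(group, 0) + 1
    return {g: hits.get(g, 0) / totals[g] for g in totals}

def disparity(rates):
    """Gap between the best- and worst-served groups: the number a
    recurring regression test could assert a ceiling on."""
    return max(rates.values()) - min(rates.values())

# Illustrative labeled samples (hypothetical data, not real results).
samples = [
    {"group": "A", "has_face_detected": True},
    {"group": "A", "has_face_detected": True},
    {"group": "B", "has_face_detected": True},
    {"group": "B", "has_face_detected": False},
]

rates = detection_rates(samples)
print(rates)             # {'A': 1.0, 'B': 0.5}
print(disparity(rates))  # 0.5
```

Run against a sufficiently diverse image set from the planning phase onward, a check like this would surface exactly the kind of uneven behavior the Time article describes long before the feature ships.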
Recently my mother-in-law-to-be decided to inform me of a new security exploit. As I listened to her and gently reminded her that my professional background might cover something of this nature, I began to think. I thought about the fact that here was a woman who had no involvement in the field of IT security and may have requested that I “put the internet back on [her] desktop” one or more times. All said, she is a wonderful woman and, as we all know, the population of application/internet users is varied (not everyone can be a superuser). The point of bringing up this story relates to the disclosure of software issues and their resolution.
With the Black Hat Conference and Def Con (two of the most famous hacker/security conferences) having recently convened, security issues ranging from SSL exploits to high-security lock breaking have made it into the mainstream media. And in reality, the release of security exploits and vulnerabilities to a public audience can be the primary goal of hackers (notoriety, financial gain, and the simple joy of accomplishment are some others). “Wide-banding” a security hole (a term taken from a William Gibson story, meaning to broadcast information to anyone and everyone who will listen, with the goal of giving everyone ownership instead of just a privileged few) can sometimes be a very efficient way to get it fixed and to let users know what risks they may be incurring.
So, since my mother-in-law and hackers are two mutually exclusive groups, what is the point here? The point is that this model for defect reporting may have some promise, not only for security issues, but for other areas of software testing. Letting all members of the software development team know about issues and defects can be helpful:
- More attention is paid to well-known issues
- More ideas for resolution can be solicited from the group – think of the Kaizen and Hansei elements of the Toyota Production System, which speak to improvement in the process and the organization
- Increased sense of project ownership for all team members
- Contingency plans can be created – since not every problem can be avoided, planning what to do when trouble strikes can be crucial
- Easier risk assessment – problems can be overemphasized or underemphasized by individuals who don’t fully understand them, and getting more eyes on the problem can also mean getting the right set of eyes on it
To clarify, I am not suggesting that security issues or defects found in a project should be broadcast to the public, or even to every person in the office. That is not always a good use of resources and would frequently violate a slew of corporate confidentiality agreements. Instead, I am advocating a “broadcast of issues” to the entire project team. Getting the word out allows the expertise and resources of several individuals on the project to be drawn upon for a solution. Utilizing this “team of resources” can frequently speed the resolution of issues and consequently add up to real savings in many areas of the project (opportunity cost, money, time).
Part I: Application Security, Milgram, and Confidence
For my first blog entry, I thought about discussing automated testing, security software, or even the recent scene-stealing security breach of the week. In the end, I thought about the importance of a solid foundation and about what was almost always the first chapter/lesson I encountered when getting started with security and learning about hackers: social engineering.
If the term is unfamiliar, social engineering is the act of manipulating people to obtain a reward. Social engineering is basically a con (short for confidence game), which is set up in order to gain the victim’s confidence and capitalize on it. The real problem we see with social engineering is that a good con can sometimes render application safeguards useless when it comes to protecting an application. In short, it doesn’t matter how thick the castle walls are if your own people open the gate for the pretty horse chock full of men with swords.
There are several human traits that factor into social engineering, but I only want to focus on one of them in this posting; I will talk about another in my next posting (along with my thoughts on solutions – teaser, teaser, teaser!). For now, I want to discuss how obedience to authority can pose a problem. There are numerous experiments that demonstrate the lengths to which obedience can be taken, but my favorite is that of Dr. Stanley Milgram, who showed us that sometimes a look of authority (and a white lab coat) is all that is needed to coerce people into doing something they may not want to do. A good social engineer can take this implicit obedience and use it against a victim and/or the company they work for. They may not demand that the victim apply electrical current to another person, but they may use this obedience to gain access or acquire sensitive information.
Case in point: I used to work in a position that had me visiting several locations a day to perform on-site IT work. At these sites, I would almost always work in a wiring closet, secure area, or server room that was supposed to be protected and would require the help of some gatekeeper to access…who was almost always nowhere to be found. Consequently, I found that instead of playing hide and seek with someone who was just going to grant me the access I needed after a spirited 45-minute chase through an office complex, a simpler and more effective method could be used. I learned that most people would not question me if I strolled through to the target area armed only with a look on my face that said “of course I know where I’m going”. I found I could even pull this off with improved posture – shoulders back, back straight, head held high – and people would assume I belonged there, no questions asked. It was simply amazing to see where I could go if I walked around like I owned the place.
So in closing, think about how your organization’s security could be susceptible to someone posing as something they are not. Ask yourself how you might engineer exploits like this, and think about the ways they can be stopped. Next time, I will share another facet of social engineering with you and attempt to make suggestions that might help improve the security of your application.