Mailing List Archive


[Fwd: SD Secure Start: Keep It Simple]
From: Joe Klemmer
Date: Mon, 10 Jan 2005 08:23:13 -0500

-------- Original Message --------
Subject: SD Secure Start: Keep It Simple
Date: Wed, 22 Dec 2004 09:00:00 -0800
From: Software Development Magazine
To: joe.klemmer@us.army.mil



SECURE START         
December 2004

This monthly e-mail newsletter is a free service 
for Software Development and SD Online subscribers. If you 
do not wish to get this newsletter, please follow the 
unsubscribe directions at the bottom of this message. 

In this Issue:
--> Keep It Simple
--> Principles to Build By
--> What Goes Wrong

***********************************************************
 
>>PRINCIPLE #6: Keep It Simple 
Complex systems may include subtle problems that might go unnoticed. 
Complex code is hard to maintain and tends to be buggy. To shore up 
security, aim for simplicity. 

The KISS mantra is pervasive -- "Keep It Simple, Stupid!" -- and it applies 
just as well to security as it does everywhere else. Complexity increases the risk 
of trouble -- avoid complexity; avoid problems.

Software design and implementation should be as straightforward as possible. 
Complex design is never easy to understand, and is therefore more likely to 
include subtle problems that will be missed during analysis. Complex code 
tends to be harder to maintain, as well. And most importantly, complex 
software tends to be far more buggy -- no surprise.

Consider reusing components whenever possible, as long as the components 
to be reused are of good quality. The more successful use a particular 
component has seen, the stronger the case for reusing it rather than 
rewriting it. This holds especially true for cryptographic libraries. 
Why would anyone want to re-implement AES or SHA-1 when several widely 
used libraries are available? A well-used library is more likely to be robust than 
one put together in-house, since people are more likely to have noticed 
implementation problems. Furthermore, subtle implementation flaws may 
not be readily apparent if both ends are using the same library. Trying to get 
different implementations of an algorithm to interoperate tends to weed out 
more problems. Experience builds assurance, especially when those experiences 
are positive. Of course, problems can exist even in widely used components, but 
it’s reasonable to suspect that less risk is involved in the known quantity, all 
other things equal. 
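To make the library-reuse point concrete (this example is ours, not the 
authors'): in Python, the standard-library hashlib module is exactly the 
kind of widely exercised implementation the article recommends over an 
in-house rewrite.

```python
import hashlib

# Reuse a widely exercised implementation rather than writing your own.
# "abc" is a published test vector, so the output can be cross-checked
# against other implementations -- the interoperability test mentioned above.
print(hashlib.sha1(b"abc").hexdigest())
# -> a9993e364706816aba3e25717850c26c9cd0d89d

# The same interface exposes newer algorithms, so migrating off SHA-1
# later is a one-line change.
print(hashlib.sha256(b"abc").hexdigest())
```

Checking a known test vector against several independent libraries is a 
cheap way to gain the cross-implementation assurance described above.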

It also stands to reason that adding bells and whistles tends to violate the 
simplicity principle. But what if the bells and whistles in question are security 
features? When we discussed defense in depth, we said that we wanted 
redundancy. Here, we seem to be arguing the opposite. We previously said, 
"Don’t put all your eggs in one basket." Now we’re saying, "Be wary of having 
multiple baskets." Both notions make sense, even though they’re apparently at 
odds. 

The key to unraveling this paradox is to strike the right balance for each 
project. When you're adding redundant features, the idea is to improve the 
overall security of the system. Once sufficient redundancy has been added 
to reach the desired security level, further redundancy is unnecessary. 
In practice, a second layer of defense is usually a good idea, but a third layer 
should be carefully considered.

Simplicity can often be improved by funneling all security-critical operations 
through a small number of choke points in a system -- small, easily controlled 
interfaces through which control must pass. This is one way to avoid spreading 
security code throughout a system. In addition, it’s far easier to monitor user 
behavior and input if all users are forced into a few small channels. That’s 
the idea behind having only a few entrances at sports stadiums; if there were 
too many entrances, collecting tickets would be more difficult, and more staff 
would be required to do the same job.
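A minimal sketch of the choke-point idea (the function and pattern names 
here are illustrative, not from the article): route every externally 
supplied value through one small validation function, so auditing input 
handling means reading one function rather than the whole system.

```python
import re

# Single choke point: every user-supplied identifier passes through here.
_IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]{0,31}")

def validate_identifier(value: str) -> str:
    """Accept a short alphanumeric identifier or raise ValueError.
    Callers never consume raw input directly."""
    if not _IDENT.fullmatch(value):
        raise ValueError(f"rejected input: {value!r}")
    return value

def handle_request(raw_name: str) -> str:
    # Each entry point funnels through the same check -- the few small
    # channels the stadium analogy describes.
    name = validate_identifier(raw_name)
    return f"hello, {name}"
```

Because rejection and logging live in one place, monitoring user input 
(the ticket-collecting job above) stays cheap as the system grows.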

Usability is another not-so-obvious but vital aspect of simplicity. Those who 
need to use a system should be able to get the best security it has to offer easily, 
and shouldn’t be able to introduce insecurities without careful deliberation. 
Usability applies both to the people who use a program and to those who 
maintain its code base or program against its API.

Keeping it simple is important in many domains, especially security. 

-- Gary McGraw and John Viega

***********************************************************

>>PRINCIPLES TO BUILD BY 

Our ten tips for safer software.

1. Secure the weakest link
2. Practice defense in depth
3. Fail securely
4. Allow least privilege
5. Compartmentalize
==> 6. Keep it simple
7. Promote privacy
8. Know that hiding is hard
9. Trust reluctantly
10. Use community resources 

-- GM and JV 


***********************************************************

>>What Goes Wrong
Security risks can remain hidden due to a system’s complexity. 

By their very nature, complex systems introduce multiple risks -- and 
almost all systems that involve software are complex. One risk is that 
malicious functionality can be added to a system (either during creation 
or afterward) that extends it past its intended design. As an unfortunate 
side effect, inherent complexity lets malicious and flawed subsystems 
remain invisible to unsuspecting users until it’s too late. This is one 
of the root causes of the malicious code problem. Another risk, more 
relevant to our purposes, is that a system’s complexity makes it hard to 
understand, hard to analyze and hard to secure. Security is difficult to 
get right even in simple systems; complex systems serve only to make 
security more difficult. Security risks can remain hidden in the 
jungle of complexity, not coming to light until these areas have 
been exploited.

A desktop system running Windows XP and its associated applications depends 
on the proper functioning of both the kernel and the applications to keep 
vulnerabilities from compromising the system. However, XP itself consists 
of at least 40 million lines of code, and end-user applications are becoming 
equally, if not more, complex. When systems become this large, bugs can’t 
be avoided.

The complexity problem is exacerbated by the use of unsafe programming 
languages (for example, C or C++) that don’t protect against simple kinds 
of attacks, such as buffer overflows. In theory, we can analyze and prove 
that a small program is free of problems, but this task is impossible for even 
the simplest desktop systems today, much less the enterprise-wide systems 
employed by businesses or governments. 
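To make the language contrast concrete (this example is ours, not the 
authors'): in C, an idiom like `char buf[4]; strcpy(buf, "overflow");` 
silently writes past the buffer, while a bounds-checked language turns the 
same mistake into an immediate, visible error.

```python
# A memory-safe runtime refuses the out-of-bounds write that C permits.
buf = bytearray(4)
try:
    buf[7] = 0x41  # write past the end of the buffer
except IndexError as exc:
    # The flaw surfaces immediately instead of silently corrupting
    # adjacent memory.
    print(f"caught: {exc}")
```

This doesn't eliminate bugs, but it converts one class of silent memory 
corruption into loud, analyzable failures.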

-- GM and JV

----------------------------------------------------------
These articles originally appeared in the May 2003 
issue of Software Development. Read more at 
http://click.sd.email-publisher.com/maacY2babcCOMchbUaQb/


***************************************************************

FREE SOFTWARE DEVELOPMENT SUBSCRIPTION 

(U.S. Residents Only)
If you don't already receive the print version of Software
Development magazine, subscribe now. It's free for qualified 
individuals.
http://click.sd.email-publisher.com/maacY2babcCOOchbUaQb/

ADVERTISING INFORMATION
For more information on advertising in Software 
Development newsletters, contact our Web Sales Managers:
East: Andrew Mintz, (978) 897-3035, amintz@cmp.com
West: Erin Rhea, (415) 947-6189, erhea@cmp.com    

FEEDBACK AND PROBLEMS
Send letters to the editor to aweber@cmp.com.
Send technical questions or problems to 
webmaster@sdmagazine.com.

THE SECURE START NEWSLETTER is a monthly newsletter brought to 
you by CMP Media LLC, publisher of Software Development 
magazine.

600 Harrison Street, 6th floor
San Francisco, CA 94107
Copyright 2004 CMP Media LLC


====================================================================
Update Your Profile:
   http://securestart.f.topica.com/f/?a84Hg7.chbUaQ.am9lLmts
Unsubscribe:
   http://securestart.f.topica.com/f/?a84Hg7.chbUaQ.am9lLmts.u
Confirm Your Subscription:
   http://securestart.f.topica.com/f/?a84Hg7.chbUaQ.am9lLmts.c