Software Security: Building Security In
Early on, Microsoft put into place the Microsoft-centric software security process shown here. Notice that security does not happen at one lifecycle stage, nor are its constituent activities "fire and forget." (All rights reserved. Reprinted with permission.)
An updated view of Microsoft's software security process.
In an e-mail sent to all Microsoft employees in January 2002 and widely distributed on the Internet, Microsoft Chairman Bill Gates started a major shift at Microsoft away from a focus on features to building more secure and trustworthy software.
The e-mail is reproduced in its entirety here. Two years ago, it was the kickoff of our .NET strategy. Before that, it was several memos about the importance of the Internet to our future and the ways we could make the Internet truly useful for people. Over the last year it has become clear that ensuring .NET as a platform for Trustworthy Computing is more important than any other part of our work. If we don't do this, people simply won't be willing—or able—to take advantage of all the other great work we do.
Trustworthy Computing is the highest priority for all the work we are doing. We must lead the industry to a whole new level of Trustworthiness in computing. When we started work on Microsoft .NET more than two years ago, we set a new direction for the company—and articulated a new way to think about our software. Rather than developing standalone applications and Web sites, today we're moving towards smart clients with rich user interfaces interacting with Web services.
We're driving the XML Web services standards so that systems from all vendors can share information, while working to make Windows the best client and server for this new era. There is a lot of excitement about what this architecture makes possible.
It allows the dreams about e-business that have been hyped over the last few years to become a reality. It enables people to collaborate in new ways, including how they read, communicate, share annotations, analyze information and meet.
However, even more important than any of these new capabilities is the fact that it is designed from the ground up to deliver Trustworthy Computing. What I mean by this is that customers will always be able to rely on these systems to be available and to secure their information. Trustworthy Computing is computing that is as available, reliable and secure as electricity, water services and telephony. Today, in the developed world, we do not worry about electricity and water services being available.
With telephony, we rely both on its availability and its security for conducting highly confidential business transactions without worrying that information about who we call or what we say will be compromised.
Computing falls well short of this, ranging from the individual user who isn't willing to add a new application because it might destabilize their system, to a corporation that moves slowly to embrace e-business because today's platforms don't make the grade. The events of last year—from September's terrorist attacks to a number of malicious and highly publicized computer viruses—reminded every one of us how important it is to ensure the integrity and security of our critical infrastructure, whether it's the airlines or computer systems.
Computing is already an important part of many people's lives. Within ten years, it will be an integral and indispensable part of almost everything we do. Microsoft and the computer industry will only succeed in that world if CIOs, consumers and everyone else see that Microsoft has created a platform for Trustworthy Computing. Every week there are reports of newly discovered security problems in all kinds of software, from individual applications and services to Windows, Linux, Unix and other platforms.
We have done a great job of having teams work around the clock to deliver security fixes for any problems that arise. Our responsiveness has been unmatched—but as an industry leader we can and must do better. Our new design approaches need to dramatically reduce the number of such issues that come up in the software that Microsoft, its partners and its customers create. We need to make it automatic for customers to get the benefits of these fixes.
Eventually, our software should be so fundamentally secure that customers never even worry about it. No Trustworthy Computing platform exists today. It is only in the context of the basic redesign we have done around .NET that we can achieve this. The key design decisions we made around .NET include the advances we need to deliver on this vision. Visual Studio .NET is the first multi-language tool that is optimized for the creation of secure code, so it is a key foundation element. I've spent the past few months working with Craig Mundie's group and others across the company to define what achieving Trustworthy Computing will entail, and to focus our efforts on building trust into every one of our products and services.
Key aspects include:

Availability: Our products should always be available when our customers need them. System outages should become a thing of the past because of a software architecture that supports redundancy and automatic recovery. Self-management should allow for service resumption without user intervention in almost every case.

Security: The data our software and services store on behalf of our customers should be protected from harm and used or modified only in appropriate ways. Security models should be easy for developers to understand and build into their applications.

Privacy: Users should be in control of how their data is used. Policies for information use should be clear to the user. Users should be in control of when and if they receive information to make best use of their time. It should be easy for users to specify appropriate use of their information including controlling the use of email they send.
Trustworthiness is a much broader concept than security, and winning our customers' trust involves more than just fixing bugs and achieving "five-nines" availability. It's a fundamental challenge that spans the entire computing ecosystem, from individual chips all the way to global Internet services. It's about smart software, services and industry-wide cooperation. There are many changes Microsoft needs to make as a company to ensure and keep our customers' trust at every level—from the way we develop software, to our support efforts, to our operational and business practices.
As software has become ever more complex, interdependent and interconnected, our reputation as a company has in turn become more vulnerable. Flaws in a single Microsoft product, service or policy not only affect the quality of our platform and services overall, but also our customers' view of us as a company.
In recent months, we've stepped up programs and services that help us create better software and increase security for our customers. Last fall we launched the Strategic Technology Protection Program, making software like IIS and Windows .NET Server secure by default, and educating our customers on how to get—and stay—secure. The error-reporting features built into Office XP and Windows XP are giving us a clear view of how to raise the level of reliability. The Office team is focused on training and processes that will anticipate and prevent security problems.
In December, the Visual Studio .NET team conducted a comprehensive review of every aspect of their product for potential security issues. We will be conducting similarly intensive reviews in the Windows division and throughout the company in the coming months. At the same time, we're in the process of training all our developers in the latest secure coding techniques.
We've also published books like Writing Secure Code, by Michael Howard and David LeBlanc, which gives all developers the tools they need to build secure software from the ground up. In addition, we must have even more highly trained sales, service and support people, along with offerings such as security assessments and broad security solutions.
I encourage everyone at Microsoft to look at what we've done so far and think about how they can contribute. But we need to go much further. In the past, we've made our software and services more compelling for users by adding new features and functionality, and by making our platform richly extensible.
We've done a terrific job at that, but all those great features won't matter unless customers trust our software. So now, when we face a choice between adding features and resolving security issues, we need to choose security. Our products should emphasize security right out of the box, and we must constantly refine and improve that security as threats evolve.
A good example of this is the changes we made in Outlook to avoid email-borne viruses. If we discover a risk that a feature could compromise someone's privacy, that problem gets solved first. If there is any way we can better protect important data and minimize downtime, we should focus on this. These principles should apply at every stage of the development cycle of every kind of software we create, from operating systems and desktop applications to global Web services.
Going forward, we must develop technologies and policies that help businesses better manage ever larger networks of PCs, servers and other intelligent devices, knowing that their critical business systems are safe from harm. Systems will have to become self-managing and inherently resilient.
We need to prepare now for the kind of software that will make this happen, and we must be the kind of company that people can rely on to deliver it. This priority touches on all the software work we do. By delivering on Trustworthy Computing, customers will get dramatically more value out of our advances than they have in the past. The challenge here is one that Microsoft is uniquely suited to solve.
Note that software security touchpoints can be applied regardless of the base software process being followed. Software development processes as diverse as the waterfall model, Rational Unified Process (RUP), eXtreme Programming (XP), Agile, spiral development, Capability Maturity Model Integration (CMMi), and any number of other processes involve the creation of a common set of software artifacts (the most common artifact being code).
You already know how to build software; what you may need to learn is how to build secure software. The artifacts I will focus on and describe best practices for include requirements and use cases, architecture, design documents, test plans, code, test results, and feedback from the field. Most software processes describe the creation of these kinds of artifacts.
In order to avoid the "religious warfare" surrounding which particular software development process is best, I introduce this notion of artifact and artifact analysis.
The basic idea is to describe a number of microprocesses touchpoints or best practices that can be applied inline regardless of your core software process. This process-agnostic approach to the problem makes the software security material explained in this book as easy as possible to adopt.
This is particularly critical given the fractional state of software process adoption in the world. Requiring that an organization give up, say, XP and adopt RUP in order to think about software security is ludicrous.
The good news is that my move toward process agnosticism seems to work out. I consider the problem of how to adapt these best practices to any particular software methodology to be beyond the scope of this book, though it is work that definitely needs to be done.
Pillar III: Knowledge

One of the critical challenges facing software security is the dearth of experienced practitioners. Early approaches that rely solely on apprenticeship as a method of propagation will not scale quickly enough to address the burgeoning problem. As the field evolves and best practices are established, knowledge management and training play a central role in encapsulating and spreading the emerging discipline more efficiently.
Pillar III involves gathering, encapsulating, and sharing security knowledge that can be used to provide a solid foundation for software security practices. Knowledge is more than simply a list of things we know or a collection of facts. Information and knowledge aren't the same thing, and it is important to understand the difference. Knowledge is information in context—information put to work using processes and procedures.
Software security knowledge can be organized into seven knowledge catalogs (principles, guidelines, rules, vulnerabilities, exploits, attack patterns, and historical risks) that are in turn grouped into three knowledge categories (prescriptive knowledge, diagnostic knowledge, and historical knowledge).
Two of these seven catalogs—vulnerabilities and exploits—are likely to be familiar to software developers possessing only a passing familiarity with software security. These catalogs have been in common use for quite some time and have even resulted in collection and cataloging efforts serving the security community.
Similarly, principles stemming from the seminal work of Saltzer and Schroeder [] and rules identified and captured in static analysis tools such as ITS4 [Viega et al.] are also reasonably well established. Knowledge catalogs identified only more recently include guidelines (often built into prescriptive frameworks for technologies such as .NET), attack patterns, and historical risks.
Together, these various knowledge catalogs provide a basic foundation for a unified knowledge architecture supporting software security. Software security knowledge can be successfully applied at various stages throughout the entire SDLC. One effective way to apply such knowledge is through the use of software security touchpoints.
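To make the taxonomy concrete, the sketch below encodes the seven catalogs and three categories as a small Java structure. The class and enum names are my own, and the catalog-to-category grouping shown (principles, guidelines, and rules as prescriptive; vulnerabilities, exploits, and attack patterns as diagnostic; historical risks as historical) is one plausible reading of the categories named above, not a definitive mapping.

public final class SecurityKnowledge {

    // The three knowledge categories named in the text.
    public enum Category { PRESCRIPTIVE, DIAGNOSTIC, HISTORICAL }

    // The seven knowledge catalogs, each mapped to a category
    // (the grouping is my reading of the text, not a quotation from it).
    public enum Catalog {
        PRINCIPLES(Category.PRESCRIPTIVE),
        GUIDELINES(Category.PRESCRIPTIVE),
        RULES(Category.PRESCRIPTIVE),
        VULNERABILITIES(Category.DIAGNOSTIC),
        EXPLOITS(Category.DIAGNOSTIC),
        ATTACK_PATTERNS(Category.DIAGNOSTIC),
        HISTORICAL_RISKS(Category.HISTORICAL);

        private final Category category;

        Catalog(Category category) {
            this.category = category;
        }

        public Category category() {
            return category;
        }
    }

    private SecurityKnowledge() { }
}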
For example, rules are extremely useful for static analysis and code review activities. Figure shows an enhanced version of the software security touchpoints diagram introduced in Figure . In Figure , I identify those activities and artifacts most clearly impacted by the knowledge catalogs briefly mentioned above. More information about these catalogs can be found in Chapter .
Mapping of software security knowledge catalogs to various software artifacts and software security best practices.
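To give a feel for what a code review rule looks like in practice, here is a minimal, hypothetical Java example. The class and method names are mine, and the rule shown—prefer parameterized queries over string-concatenated SQL—is a standard one rather than a rule quoted from any particular tool.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class AccountLookup {

    // BAD: a static analysis rule would flag this method. The query is built by
    // concatenating untrusted input into SQL text, opening the door to SQL injection.
    public ResultSet findByNameUnsafe(Connection conn, String name) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM accounts WHERE owner = '" + name + "'");
    }

    // BETTER: the same lookup with a parameterized query. The rule's suggested
    // remediation is to bind untrusted data instead of splicing it into the SQL string.
    public ResultSet findByName(Connection conn, String name) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT * FROM accounts WHERE owner = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}

A tool enforcing such a rule reports the offending line, explains the risk, and points to the safer construct, which is exactly the kind of knowledge a rules catalog packages for reuse.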
However, the most important audience has in some sense experienced the least exposure—for the most part, software architects, developers, and testers remain blithely unaware of the problem. One obvious way to spread software security knowledge is to train software development staff on critical software security issues.
The most effective form of training begins with a description of the problem and demonstrates its impact and importance. During the Windows security push in February and March 2002, Microsoft provided basic awareness training to all of its developers.
Many other organizations have ongoing software security awareness training programs. Beyond awareness, more advanced software security training should offer coverage of security engineering, design principles and guidelines, implementation risks, design flaws, analysis techniques, and security testing.
Special tracks should be made available to quality assurance personnel, especially those who carry out testing. Of course, the best training programs will offer extensive and detailed coverage of the touchpoints covered in this book. Putting the touchpoints into practice requires cultural change, and that means training.
Assembling a complete software security program at the enterprise level is the subject of Chapter . The good news is that the three pillars of software security—risk management, touchpoints, and knowledge—can be applied in a sensible, evolutionary manner no matter what your existing software development approach is.

The Rise of Security Engineering

Designers of modern systems must take security into account proactively.
This is especially true when it comes to software because bad software lies at the heart of a majority of computer security problems. Software defects come in two flavors—design-level flaws and implementation bugs. To address both kinds of defects, we must build better software and design more secure systems from the ground up. Most computer security practitioners today are operations people. They are adept at designing reasonable network architectures, provisioning firewalls, and keeping networks up.
Unfortunately, many operations people have only the most rudimentary understanding of software. This leads to the adoption of weak reactive technologies (think "application security testing" tools). Tools like those target the right problem (software) with the wrong solution (outside-in testing).
Fortunately, things are beginning to change in security. Practitioners understand that software security is something we need to work hard on. The notion that it is much cheaper to prevent than to repair helps to justify investment up front.
In the end, prevention technology and assurance best practices may be the only way to go. Microsoft's Trustworthy Computing Initiative is no accident. If we are to build systems that can be properly operated, we must involve the builders of systems in security.
This starts with education, where security remains an often-unmentioned specialty, especially in the software arena. Every modern security department needs to think seriously about security engineering. The best departments already have staff devoted to software security. Others are beginning to look at the problem of security engineering. At the very least, close collaboration with the "builders" in your organization is a necessity.
Don't forget that software security is not just about building security functionality and integrating security features! The question to ask in response is, "What attacks would have serious impact and are worth avoiding for this module?"

Software Security Is Everyone's Job

Connectivity and distributed computation are so pervasive that the only way to begin to secure our computing infrastructure is to enlist everyone. Operations people must continue to architect reasonable networks, defend them, and keep them up.
Administrators must understand the distributed nature of modern systems and begin to practice the principle of least privilege. Witness the rise of Firefox. Users must also understand that they are the last bastion of defense in any security design and that they need to make tradeoffs for better security. Executives must understand how early investment in security design and security analysis affects the degree to which users will trust their products.
The most important people to enlist for near-term progress in computer security are the builders. Only by pushing past the standard-issue operations view of security will we begin to make systems that can stand up under attack.

Chapter 2. A Risk Management Framework

No noble thing can be done without risks.

However, nomenclature remains a persistent problem in the security community.
The idea of risk management as a key tenet of security, though pervasive and oft repeated, is presented under a number of different rubrics in software security, attached to particular processes, such as "threat modeling" and "risk analysis," as well as to larger-scale activities such as "security analysis." By teasing apart architectural risk analysis (one of the critical software security touchpoints described later in the book) and an overall risk management framework (RMF, described here), we can begin to make more sense of software security risk.
An RMF is at its heart a philosophy for software security. Following the RMF is by definition a full lifecycle activity, no matter whether you're working on a little project or a huge corporate application strategy. The key to reasonable risk management is to identify and keep track of risks over time as a software project unfolds. As touchpoints are applied and risks are uncovered, for example, an RMF allows us to track them and display information about status.
For the purposes of this chapter, consider risk management as a high-level, iterative approach that is deeply integrated throughout the software development lifecycle (SDLC) and unfolds over time. The basic idea is simple: identify, rank, track, and understand software security risk as it changes over time. What follows in this chapter is a detailed explanation of a mature RMF used at Cigital. This chapter may be a bit heavy for some. If you're more interested in specific best practices for software security, you should skip ahead to Part II.
If you do skip ahead, make sure you cycle back around later in order to understand how the framework described here supports all of the best practices.
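As a minimal sketch of the identify-rank-track-and-understand loop described above (not the Cigital RMF itself; every class and field name here, and the simple exposure formula, are my own illustrative assumptions), a risk register might look something like this in Java:

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of a risk register supporting identify, rank, track,
// and understand. Names are hypothetical, not taken from the RMF detailed
// later in this chapter.
public class RiskRegister {

    public enum Status { OPEN, MITIGATING, ACCEPTED, CLOSED }

    public static final class Risk {
        final String description;   // what could go wrong, stated in business terms
        final double likelihood;    // 0.0 - 1.0, revisited as the project unfolds
        final double impact;        // relative cost to the business if realized
        Status status = Status.OPEN;
        LocalDate lastReviewed = LocalDate.now();

        Risk(String description, double likelihood, double impact) {
            this.description = description;
            this.likelihood = likelihood;
            this.impact = impact;
        }

        // One simple way to rank: exposure = likelihood x impact.
        double exposure() {
            return likelihood * impact;
        }
    }

    private final List<Risk> risks = new ArrayList<>();

    // Identify: record a new risk as a touchpoint uncovers it.
    public Risk identify(String description, double likelihood, double impact) {
        Risk risk = new Risk(description, likelihood, impact);
        risks.add(risk);
        return risk;
    }

    // Rank: highest exposure first, so mitigation effort goes where it matters most.
    public List<Risk> ranked() {
        List<Risk> copy = new ArrayList<>(risks);
        copy.sort(Comparator.comparingDouble(Risk::exposure).reversed());
        return copy;
    }

    // Track: update status and review date as new information arrives over time.
    public void track(Risk risk, Status newStatus) {
        risk.status = newStatus;
        risk.lastReviewed = LocalDate.now();
    }
}

The point of such a structure is not the arithmetic but the bookkeeping: risks identified by the touchpoints get ranked, revisited, and tracked to closure as the project unfolds.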