SDM > Practices
Process impact: This document defines the practices the team
follows in our software development methodology. Each practice has a set of
defined goals and rules. Documenting this aspect of the process
helps identify tasks that must be done during development.
The explicit list of goals and rules helps team members to
understand what they must do.
Overview of Practices
We have defined our software development process by choosing
practices from the software engineering literature, refining them to
fit our organization, and adding our own practices based on our
experiences.
The practices are organized by source:
Please note that the descriptions below are only inspired
by the named sources.
TODO: Read the list of practices below and consider how each
relates to your software development process. Adjust the
definitions to fit your current process. Discuss these practice
definitions with the relevant team members. Use the list of practices
as a checklist when planning project tasks. Refine this document
over time to help improve your process.
Our Organization's Development Practices
PRACTICE-NAME
Description: |
DESCRIPTION.
|
Goals: |
|
Rules: |
|
PRACTICE-NAME
Description: |
DESCRIPTION.
|
Goals: |
|
Rules: |
|
PRACTICE-NAME
Description: |
DESCRIPTION.
|
Goals: |
|
Rules: |
|
PRACTICE-NAME
Description: |
DESCRIPTION.
|
Goals: |
|
Rules: |
|
Selected CMMI-SW 1.1 Level 2 Process Areas
Requirements Management
Description: |
We take the software requirements specification seriously.
The SRS is a central document: it determines exactly what we
will build and what we will verify in QA. We have specific
procedures in place for evaluating and responding to requests
for changes to the SRS.
|
Goals: |
- Complete and correct specifications of the desired system
- Reduce effort wasted on building incorrect requirements
- Allow controlled requirements changes when needed
- Organize project tasks around requirements
- A culture of controlled change
|
Rules: |
- We seek out and work with project stakeholders
- All requirements from all sources are captured in the user
needs document and then addressed in the SRS
- We never design or implement features that are not in the
SRS
- We keep the SRS up-to-date, and under version control
- Management controls changes to the SRS based on business
needs and impact on the project
- Every specified feature must be justified with at least one
use case
- The SRS is kept on the project website where
any team member can access it
|
Project Planning
Description: |
We take project planning seriously. We plan our work before
doing it, and make conscious decisions based on the
plan.
|
Goals: |
- A detailed plan that guides day-to-day effort
- Accurate estimates that enable informed planning decisions
- Flexibility to change the plan when needed
- An understanding of project schedule, resource needs, and risks
- A culture of realistic planning
|
Rules: |
- Every project has an assigned project manager with clear
responsibility and authority
- We manage project scope and prevent scope creep
- We keep the plan up-to-date, and under version control
- We involve team members in estimating tasks that will be
assigned to them
- We make sure that the schedule is realistic so that we can
fulfill our commitments
- We use metrics from past projects when estimating current tasks
- We maintain a schedule for each release
- We estimate and track project resource needs
- We discover and track project risks
- The project plan is kept on the project website where
any team member can access it
|
Project Monitoring and Control
Description: |
We actively track the project status and make adjustments to
the SRS and plan as needed. We avoid surprise failures because
we notice and react to small deviations as they are
happening.
|
Goals: |
- Every stakeholder always knows the project status
- Status reports are objective and accurate
- Status reports aid informed planning decisions
- A culture of project management
|
Rules: |
- Groups of team members produce weekly status reports
- Status reports include objective metrics of progress
according to the project plan
- Status reports track known risks and identify new risks
- Management reads status reports and refines plans
- Status reports are kept on the project website where
any team member can access them
- We record metrics (e.g., effort) that help with future planning
|
Supplier Agreement Management
Description: |
We carefully select and manage suppliers and subcontractors:
e.g., component and tool vendors, outsourced development work,
and localization partners. We consider the entire interaction
with them, from selection to delivery and
integration.
|
Goals: |
- Acquire the best tools and components
- Manage the risk of working with suppliers
- Smoothly transition and adopt acquired technology
|
Rules: |
- Every supplier selection includes a review of competing suppliers
using explicit criteria
- We always evaluate a subcontractor's ability to do the work
- Every external supplier agreement is legally binding
- Every supplier agreement specifies the deliverables in detail
- Every agreement specifies both process and product requirements
- Every agreement specifies post-delivery support
- We work closely with subcontractors to coordinate our efforts
- Every supplier relationship has an assigned manager
- We periodically review our suppliers and subcontractors and
make changes when needed
|
Measurement and Analysis
Description: |
We recognize that metrics are needed for decision-making by
management and individual team members. We make a consistent
effort to gather key metrics and actually use
them.
|
Goals: |
- Inform decision-making with key metrics
- Gather metrics with reasonable effort
- A culture of management by objective measures
|
Rules: |
- Every status report includes objective measures of progress
- We teach all stakeholders how to interpret the metrics
- We estimate the size of every proposed component
- We track our team velocity: progress per week in ideal
engineering hours
- Every project has infrastructure in place to collect certain
key metrics automatically
- We set planning goals in terms of objective metric
values
- We use size estimates and historic defects/KLOC to
predict the number of expected defects
- We record metrics (e.g., effort and defects/KLOC) that
help with future planning
|
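The defects/KLOC prediction rule above amounts to a simple calculation. This is an illustrative sketch only; the component size and historic density values are assumptions, not figures from our projects:

```python
def predict_defects(size_kloc: float, historic_defects_per_kloc: float) -> float:
    """Predict the expected defect count for a component from its
    estimated size and the defect density observed on past projects."""
    return size_kloc * historic_defects_per_kloc

# Illustrative values: a 12 KLOC component, historic density of 4.5 defects/KLOC.
expected_defects = predict_defects(12.0, 4.5)  # 54.0
```

Predictions like this feed QA resource planning and release scheduling decisions.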
Process and Product Quality Assurance
Description: |
We understand that quality is key to the development process
because it strongly affects our development schedule, the value
of the product, and our support costs. We build in quality
throughout the development process.
|
Goals: |
- Produce a high-quality product
- Determine and track the quality of the product
- Reduce wasted effort due to defects
- Reduce uncertainty and rescheduling due to defects
- A culture of building quality products
|
Rules: |
- Every project has an assigned QA manager or QA lead
- We start with high quality requirements and design
- We select high quality components and tools
- We keep the QA plan up-to-date, and under version control
- We actually allocate QA resources as per the plan
- We actively build in quality and test for poor quality
- We test to the SRS
- We track all defects in an issue tracking tool
- We verify that fixes actually fix the entire problem
- We share QA status and metrics with all team members
- QA lessons learned are used for process improvement
|
Configuration Management > Version Control
Description: |
Software development involves constant changes to fragile
artifacts. We always control changes using version
control tools and change control boards to reduce
risk.
|
Goals: |
- We always know which changes are in a given release
- Management can approve or defer specific changes
- Merge/integrate changes into the product correctly
- We know exactly which version we are testing or releasing
- We can build any specific version at any time
- Keep a history of all versions and releases
- A culture of controlled change
|
Rules: |
- Every project has the needed infrastructure for VC: tools,
secure servers, admins, backups, etc.
- Every project has an assigned release engineer
- We keep all code and documents under VC
- We tag all internal and external releases
- We use an automated build system so that everyone builds the
product from source code the same way
- The change control board reviews changes before integration
into a release
- The change control board adjusts its policy to fit specific
components or points in the release cycle
- We use branches to separate changes intended for
future releases from current work
- We use branches to separate experimental or risky changes
until they are approved for integration
- Only officially tagged and approved releases are
deployed
- The checked-in code always compiles and passes basic
tests
|
Configuration Management > Issue Tracking
Description: |
Software development consists of thousands of requests for
changes and repairs. Everyone on the team always
tracks change requests and defects using one issue tracking
tool.
|
Goals: |
- All issues are tracked
- All issues are tracked in one tool
- Management can approve or defer specific issues
- Keep a history of all issues
- A culture of task management
|
Rules: |
- Every project has access to the issue tracking tool
- We keep all change requests, defects, and task
assignments in the issue tracking tool
- We estimate the cost and impact of requested changes
- Developers only make changes in response to assigned issues
in the issue tracker
- Commit messages always reference the ID of the issue being
worked on
- The change control board reviews issues before scheduling
work on them within a given release
- Issue status and comments are the main data used in
evaluating project status
- Comments in issues and commit messages are used to
generate release notes
|
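The rule that every commit message references an issue ID can be checked mechanically. A minimal sketch, assuming a hypothetical PROJ-123 style of issue ID; the pattern, and the idea of running this in a version-control commit hook, are assumptions rather than part of the defined process:

```python
import re

# Hypothetical convention: every commit message must contain an
# issue ID of the form "PROJ-123" (uppercase project key, dash, number).
ISSUE_ID = re.compile(r"\b[A-Z]+-\d+\b")

def references_issue(commit_message: str) -> bool:
    """Return True if the commit message mentions at least one issue ID."""
    return ISSUE_ID.search(commit_message) is not None
```

A check like this could run in a commit-message hook on the version control server to reject non-conforming commits before they reach the repository.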
Selected CMMI-SW 1.1 Level 3 Process Areas
Requirements Development
Description: |
We follow a defined process for gathering user needs and
transforming them into the SRS.
|
Goals: |
- A solid understanding of user needs
- A solid understanding of market opportunities
- An SRS with strong potential for market success
- A culture of building great products
|
Rules: |
- We seek out and work with potential customers and other stakeholders
- We gather user needs from all stakeholders
- We analyze the market and competing products
- We prioritize user needs based on market opportunities
- We use interviews, surveys, and prototypes in gathering user
needs
- We involve stakeholders in writing user stories
- We specify functional, non-functional, and environmental
requirements
- We write use cases and feature specifications that
complement each other
- We work with stakeholders to validate requirements
|
Technical Solution
Description: |
We follow a defined process when designing and implementing the
system.
|
Goals: |
- Selection of the best design approach
- A high quality design and implementation
- Improved shared understanding of the design and implementation
- Reduced effort in later design rework
|
Rules: |
- We consider several design approaches and evaluate them for feasibility
- We use UML to specify our designs and component interfaces
- Each component's interfaces are specified in detail
- We consider key qualities (e.g., security) early in design
- We conduct weekly design and code reviews
- We conduct design and code reviews at milestones
- Design documents are kept up-to-date, and under version
control
- We actually follow the design during implementation
- We follow coding standards and style guides
- Unit testing is done during development
- User and developer documentation is produced during development
- We build components with integration in mind
|
Verification
Description: |
The software system must conform to its specification.
Despite our best efforts to build in quality, there will always
be some defects. During verification, we follow defined
processes to test the product against the SRS.
|
Goals: |
- Uncover software defects so that they may be repaired
- Measure confidence in the quality of the system
- Gather quality metrics for use in later planning
|
Rules: |
- We allocate adequate time and resources for verification
- We conduct design and code reviews
- The SRS must be precise enough to be used in testing
- We test to the SRS
- Every project has access to QA resources: e.g., tools and servers
- We design a test suite that can be carried out repeatedly
- The test suite is kept under version control
- We consider the testability of our design and implementation
- Test coverage is measured against stated goals
- We keep an organized log of all reviews
- We keep an organized log of all test runs
- We track all discovered defects in the issue tracker
- We select issues to fix in this release and actually fix them
|
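The rule on measuring test coverage against stated goals reduces to a simple comparison; a minimal sketch, where the 80% goal shown is an assumed example rather than an organizational standard:

```python
def coverage_met(covered_lines: int, total_lines: int, goal_percent: float) -> bool:
    """Compare measured test coverage against the project's stated goal."""
    if total_lines == 0:
        return False  # no measurable code; treat the goal as unmet
    return 100.0 * covered_lines / total_lines >= goal_percent

# Assumed example: 850 of 1000 lines covered, against an 80% goal.
meets_goal = coverage_met(850, 1000, 80.0)  # True
```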
Validation
Description: |
If we are not careful, it is easy to build the wrong
product. Changing market conditions and the complexity of the
product make it even harder to be sure that we have the correct
requirements. We actively work with stakeholders to validate
our requirements.
|
Goals: |
- An improved SRS based on stakeholder feedback
- Reduce the risk that we are building the wrong system
- Improve stakeholders' confidence in our understanding of their needs
|
Rules: |
- The SRS must be precise enough to be used in validation
- We allocate adequate time and resources for validation
- Use cases are validated with cognitive walk-throughs
- We show mock-ups or prototypes to customers to get feedback
- We track validation issues in the issue tracker
- We actually update the SRS to fix validation issues
|
Organizational Process Focus & Definition
Description: |
We believe that a good software development process is an
organizational asset that has real value. Everyone on the team
is involved in creating and following our software development
process. We fully train our team on our own process. People
respect the process when they participate in defining and
improving it.
|
Goals: |
- Each team member understands the process
- Team members view the process as useful and productive
- Team members actually follow the process
- Team members improve the process rather than ignore it
- A valuable library of process assets
- A culture of process improvement
|
Rules: |
- We allocate needed resources to our SPI group
- We allocate resources for development process
infrastructure: e.g., tools, servers, training
- Our SPI group maintains our SDM, templates, and other
process assets
- We train all team members in our SDM
- Our SPI group observes how projects use the SDM
- SDM changes may be proposed by team members
- Our SPI group proactively improves the SDM
- We satisfy process requirements from customers
and regulators
- We do postmortem reviews to gather SPI recommendations
- SPI efforts are separate from personnel performance reviews
- Project metrics are kept and used in estimating
- Project document templates are maintained and refined
|
Organizational Training
Description: |
On-going training is key to maintaining the value of our
staff and our organization's ability to develop software. We
show that we take training seriously by establishing
expectations, planning training activities, and allocating
resources.
|
Goals: |
- An up-to-date assessment of our training needs
- Increased value of staff skills
- Reduced staff turn-over
- A culture that values skills and supports training
|
Rules: |
- We allocate resources needed for project-specific training
- We identify organization-wide training needs
- We allocate resources needed for organization-wide training
- We plan for team members to have 8-16 training days each year
- We assess the effectiveness of training and adjust our plans
|
Integrated Project Management
Description: |
In a large organization, not all projects can or should
follow exactly the same process: some projects will need to
tailor the standard process to their specific needs. We have
defined ways of tailoring the process to fit specific projects
while retaining much of the value of the standard
process.
|
Goals: |
- An SDM that better suits a specific project
- A culture of tailoring the SDM rather than ignoring it
|
Rules: |
- Each project starts with the standard SDM and tailors it
only as much as needed
- The project-specific SDM satisfies process requirements of
the customer
- Each project maintains project-specific SDM documents
- Proposed project-specific SDM changes are reviewed by team members
- Each project actually follows its tailored SDM
- Lessons learned are applied back to the standard SDM
|
Risk Management
Description: |
Risk management means identifying potential problems and
planning to deal with them to protect the project. We evaluate
risks based on their likelihood and impact. We track risks
throughout the project and take mitigating actions to reduce
their likelihood or impact.
|
Goals: |
- A set of identified project risks
- A set of actions to mitigate risks
|
Rules: |
- We allocate resources for risk management
- Risks are identified during project planning and tracked
throughout the project
- Risks are classified by likelihood and impact
- We plan actions to mitigate risks
- We actually carry out mitigation actions as needed to reduce
the biggest risks
- We make contingency plans to deal with risks that turn into
actual problems
|
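Classifying risks by likelihood and impact and then ranking them by exposure can be sketched as follows; the 1-5 scales and the example risks are illustrative assumptions, not items from a real risk register:

```python
def rank_risks(risks: dict) -> list:
    """Rank risks by exposure = likelihood x impact, biggest first.

    `risks` maps a risk name to a (likelihood, impact) pair; here both
    are assumed to be on a 1-5 scale (an illustrative convention).
    """
    return sorted(risks, key=lambda name: risks[name][0] * risks[name][1],
                  reverse=True)

# Example risk register (hypothetical entries).
risks = {
    "key developer leaves": (2, 5),      # exposure 10
    "third-party API change": (4, 3),    # exposure 12
    "schedule slip on UI work": (3, 2),  # exposure 6
}
# Mitigation effort goes to the top of this list first.
ranking = rank_risks(risks)
```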
Selected Agile Practices
Steer, Don't Aim
Description: |
We focus on achieving near-term results before planning too
far into the future. Rather than plan now for an uncertain
future, we keep our options open and plan small iterations as we
go, using the most up-to-date information available at the
time.
|
Goals: |
- Solid plans for realizing near-term potential
- Realistic task estimates
|
Rules: |
- We steer the project toward success, one iteration at a time
- We plan iterations lasting 2-4 weeks
- We prefer plans and designs that reduce the cost of change
later
- Tasks are estimated in ideal engineering hours
- Schedules are based on the team's velocity (ideal engineering
hours accomplished per week)
|
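The velocity-based scheduling rules above amount to a simple calculation; a minimal sketch with illustrative task estimates and velocity, not numbers from an actual plan:

```python
import math

def weeks_to_complete(task_hours: list, velocity_hours_per_week: float) -> int:
    """Estimate calendar weeks from task estimates (in ideal engineering
    hours) and the team's measured velocity (ideal hours per week)."""
    return math.ceil(sum(task_hours) / velocity_hours_per_week)

# Illustrative numbers: three tasks, team velocity of 60 ideal hours/week.
weeks = weeks_to_complete([40, 25, 55], 60)  # 2
```

If the result overruns the iteration length, tasks are negotiated back out of scope rather than stretched to fit.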
The Planning Game
Description: |
At the start of each iteration, we work with the customer to
select a few high-priority requirements to be addressed in that
iteration.
|
Goals: |
- Direct, effective customer involvement in requirements management
- Narrowly-scoped requirements for the next iteration
- A track record of successfully delivering small releases
|
Rules: |
- We write down user stories and other development tasks
- We always have a customer (or customer representative) on the team
- We estimate user story effort and priority
- We negotiate with the customer to select a set
of user stories for the next release
|
Small Releases
Description: |
One of the best ways to manage project risk is to make
small, incremental releases. Complex systems tend to evolve
faster when they move through a series of stable intermediate
versions.
|
Goals: |
- An early first release
- Stable code that allows us to take the next release in a new
direction
- A track record of successfully delivering small releases
|
Rules: |
- We start by building the smallest, simplest working system possible
- We then make a long series of small upgrades, each of which
also works
- We practice the entire SDM (including QA) on each release
|
Simple Design
Description: |
We focus our design efforts on the requirements for this
release, without complicating it with concerns about future
requirements. We do that because we feel that simplicity is key
to flexibility, and flexibility is key to having the best
requirements and design in the end.
|
Goals: |
- A simple, maintainable design
|
Rules: |
- We prefer simple designs over complex ones
- We only respond to current requirements in our designs
- We eliminate duplication in the design
- We solve problems only when they occur
|
Test-First
Description: |
We test exactly to the SRS by merging the test suite with
the SRS. Rather than write elaborate SRS documents (which are
unlikely to be maintained), we write executable specifications
in the form of test cases: i.e., the system is correct if it
passes these tests. The act of writing test cases helps
developers better understand the requirements before
implementing them.
|
Goals: |
- A formal specification in executable code
- An improved understanding of the requirements
- A working test suite, early in the release cycle
- An objective measure of project status: test results
- A culture of practical quality
|
Rules: |
- We write simple, automated tests for every unit of code
- When implementing the code, we keep working until it passes the tests
- We rerun tests frequently to find defects and measure
progress
- We design our code to be easily and fully testable
- We evaluate our test suite by measuring test coverage
- We update test cases when requirements change
|
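A test-first unit might look like the following sketch: the test is written first as the executable specification, and the code is then implemented until it passes. The username rule shown is a hypothetical requirement, not one from our SRS:

```python
import unittest

# Written first, as the executable specification: the (hypothetical)
# requirement is that usernames are 3-20 lowercase letters or digits.
class TestUsernameRule(unittest.TestCase):
    def test_accepts_valid_name(self):
        self.assertTrue(is_valid_username("alice42"))

    def test_rejects_short_name(self):
        self.assertFalse(is_valid_username("ab"))

    def test_rejects_uppercase(self):
        self.assertFalse(is_valid_username("ALICE"))

# Implemented second, and revised until the tests above pass.
def is_valid_username(name: str) -> bool:
    return 3 <= len(name) <= 20 and name.isalnum() and name == name.lower()
```

When a requirement changes, the test changes first, and the failing test drives the code change.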
Refactoring
Description: |
We are not afraid of change; we embrace it. We purposely
start with designs that satisfy only the highest priority
requirements, knowing that the designs must be changed later.
When designs must change, we try to make the changes as
productively and reliably as possible.
|
Goals: |
- The ability to revise the design over time
- The best and simplest design that satisfies current requirements
- A culture of constant design improvement
|
Rules: |
- We use refactoring tools that make it easy to make design changes
- We prove to ourselves that each refactoring maintains
desired qualities
- We fully test the refactored design
|
Pair Programming
Description: |
Two brains are better than one when programming. The task
of programming demands mental attention at two levels: the
semantics of the code being written, and the mechanics of using
the development tools and following style guidelines. When two
programmers work together, they can catch each other's errors
and think more strategically.
|
Goals: |
- Higher quality code due to errors caught early
- Improved shared understanding of the implementation
- Peer-to-peer training integrated into daily activity
- Improved morale and job satisfaction
|
Rules: |
- Every project has the infrastructure needed (e.g., office
layout, chat software, etc.)
- Developers may choose pair programming
- One developer codes, the other thinks about the design and
catches errors
|
Collective Code Ownership
Description: |
We believe that the entire system is the responsibility of
the entire team. We encourage each developer to understand much
of the entire system, and we expect almost any developer to be
able to carry out any particular task.
|
Goals: |
- Improved shared understanding of the code
- Increased flexibility in assigning tasks to developers
- Reduced delays due to individual developer bottlenecks
- Improved continuity during staff turn-over
|
Rules: |
- Any developer can change any code, as needed
- Everyone codes to the same stylistic standards
- Every developer maintains an understanding of nearly every
component
- We maintain automated regression tests to catch defects
introduced during changes
|
Continuous Integration
Description: |
In traditional waterfall processes, the system integration
phase tends to accumulate unseen risks and delay the evaluation
and mitigation of those risks until too late. Instead, we
continuously integrate changes from all developers into the
product and deal with conflicts immediately.
|
Goals: |
- An improved understanding of component interactions
- A working system at nearly any point in time
- Reduced risks of late integration problems
|
Rules: |
- We maintain a single current version of the
entire product
- We merge changes from different developers very frequently
|
40-Hour Work Week
Description: |
Development tasks require thoughtful decisions that should
not be made under stress. We believe that developers should
have a sustainable career that does not burn them
out.
|
Goals: |
- More realistic schedules from the start
- Increased job satisfaction and performance
- Reduced staff turn-over
|
Rules: |
- Our developers work reasonable hours
- We deal with schedule problems by re-planning rather than
adding unpaid overtime
|
Selected Open Source Practices
Provide universal, immediate access to all project
artifacts
Description: |
The heart of the open source method is the fact that the
program source code is accessible to all project
participants. Beyond the source code itself, open source
projects tend to allow direct access to all software development
artifacts such as requirements, design, open issues, rationale,
development team responsibilities, and schedules.
|
Goals: |
- Easy access to all project artifacts
- Easy assessment of project status
- A culture of process transparency
|
Rules: |
- We provide collaborative infrastructure including issue
tracking, version control, and mailing lists
- Every project maintains an up-to-date project website
- Team members can always work on up-to-date versions of
files
|
Work in communities that accumulate software assets
and standardize practices
Description: |
Collaborative development environments (CDEs) reduce the
effort needed to start a new project by providing a complete,
standard tool-set. They warehouse reusable components, provide
access to the developers that support them, and make existing
projects in the communities accessible as demonstrations of how
to use those tools and components.
|
Goals: |
- A library of reusable components
- Visible examples of reuse
- An organization-wide standard tool set
- A bottom-up adoption of the SDM
|
Rules: |
- We use mostly the same tools across the entire organization
- We allow teams to access other projects for reference
- We maintain an organized library of reusable software assets
- Developers continue to support reusable components that they
authored
|
Follow standards to validate the project, scope
decision-making, and enable reuse
Description: |
The lack of formal requirements generation in open source
projects tends to encourage reliance on externally defined
standards and conventions. Deviation from standards is
discouraged because of the difficulty of specifying an
alternative with the same level of formality and agreement among
contributors. Standards also define interfaces that give choice
to users and support diversity of usage.
|
Goals: |
- High quality requirements with less effort
- An SRS that emphasizes interoperability
|
Rules: |
- We prefer existing standards over our own
new specifications
- When evaluating project proposals, we favor adherence to
standards
- We use the standard to avoid or resolve disagreements over
requirements
- We use standard interfaces to promote interoperability
|
Practice reuse and reusability to manage project scope
Description: |
Open source projects that start with significant reuse tend
to be more successful because they can demonstrate results
sooner, they focus discussions on the project's value-add, and
they resonate with the cultural preference for reuse.
Spinning out a reusable component is encouraged because it
fits the cultural preference for reuse, and often gives a
mid-level developer the social reward of becoming a project
leader.
|
Goals: |
- Reduced project scope due to reuse of existing components
- Increased product quality due to reuse of high quality components
- A library of reusable components
- A culture of reuse
|
Rules: |
- When evaluating project proposals, we favor planned
reuse
- We prefer to reuse rather than write our own new code
- We prefer supported reusable components over maintaining our
own code
- We encourage new projects that produce or extract reusable
components
|
Release early, release often
Description: |
Open source projects have very low overhead for each
release. That allows them to release as early and often as the
developers can manage. A hierarchy of release types is used to
set user expectations: "stable", "development", and "nightly".
In fact, open source projects release pre-1.0 versions to
attract the volunteer staff needed to reach 1.0. Reacting to
the feedback provided on early releases is key to requirements
gathering and risk management practices in open
source.
|
Goals: |
- Low overhead to the release process
- Early releases that help validate requirements
- Frequent releases to force product stabilization
- Frequent releases to gather user feedback
- Appropriate quality expectations for each release
|
Rules: |
- We use automated build tools to build and package the entire
product
- We have a low-overhead process for authorizing and producing
development releases
- When evaluating project proposals, we favor early release
plans
- We do development work in small increments so that a new
release can be produced at almost any time
- We show early releases to customers and gather feedback to
validate requirements
- Each release is clearly labeled with its release type
|
Place peer review in the critical path
Description: |
Feedback from other developers is central to open source
development. Often, only a core group of developers can commit
changes to the version control system; other contributors must
submit a patch that can only be applied after review and
discussion by the core developers. It is common for each change
to generate an automated email notification that can prompt peer
review. The claim that "given enough eyeballs, all bugs are
shallow" underscores the emphasis on peer review.
|
Goals: |
- Higher quality due to early peer review
- Better shared understanding of the code
- A culture of review and discussion
|
Rules: |
- "Buddy reviews" are done at key times in the development
process and for risky changes
- We encourage shared understanding of the code
- We provide infrastructure for change notification emails
- Time is allocated for peer review
|
Company Proprietary