10.8.6  Secure Application Development

Manual Transmittal

January 24, 2013

Purpose

(1) This transmits Internal Revenue Manual (IRM) 10.8.6, Secure Application Development. This IRM applies to the code development or modification of commercial off-the-shelf (COTS) and government off-the-shelf (GOTS) software for use within the IRS.

Background

This IRM establishes comprehensive Information Technology (IT) security policies and provides guidance to all IRS organizations developing or modifying application code for use within the IRS. This IRM shall be used in conjunction with coding guidelines found in IRM 2.5.3, Systems Development, Programming and Source Code Standards and the Application Development Program Management Office (PMO) Security's Final Secure Coding Guidelines document.

Material Changes

(1) The following sections have been updated/clarified with this version of policy:

  1. Changed name of Exhibit 10.8.6-4 from Requirements Identified for Relocation to IRM 2.5.3 to Recommended Best Practices. The recommendations listed in Exhibit 10.8.6-4, Additional Systems Development, Programming and Source Code Standards, were identified in the July 31, 2007 version of IRM 10.8.6, for relocation to IRM 2.5.3, Systems Development, Programming and Source Code Standards. These recommendations are slated to be incorporated into a future revision of IRM 2.5.3. Until that time, they can be found in the exhibit table below and should be implemented where applicable.

  2. Changed name of Exhibit 10.8.6-5 from Checklists Identified for Relocation to IRM 2.5.3 to Checklists for Additional Systems Development, Programming, and Source Code Standards.

  3. Effective July 1, 2012, the Modernization and Information Technology Service (MITS) organization changed its name to IRS Information Technology (IT). All instances of MITS within this IRM have been updated to IRS Information Technology organization to reflect the change. (Link to IT website communication is: http://it.web.irs.gov/ProceduresGuidelines/ITNameChange.htm)

  4. Updated process and organization names (e.g., Security Test & Evaluation (ST&E) to SCA, Deviations process to Risk Based Decision)

  5. Editorial changes (including grammar, spelling, and clarification) made throughout the IRM.

Effect on Other Documents

IRM 10.8.6 dated August 31, 2010, is superseded. This IRM supplements IRM 10.8.1, Information Technology (IT) Security, Policy and Guidance.

Audience

IRM 10.8.6 shall be distributed to all personnel responsible for ensuring adequate security for IRS information and information systems. This Developer’s Guide is not intended as a primer for novice Program Developers/Programmers. The reader is expected to be well-versed and experienced in general systems engineering, software development, and software testing practices. The reader should have a thorough understanding of and experience with the development of software applications and web services and of the technologies involved. This policy applies to all employees, contractors, and vendors of the IRS.

Effective Date

(01-24-2013)

Terence V. Milholland
Chief Technology Officer

10.8.6.1  (08-31-2010)
Purpose

  1. This manual provides policies and guidance to be used by IRS organizations to carry out their respective responsibilities in information systems security regarding secure application development. It provides guidance on the creation and modification of Commercial off the Shelf (COTS) and Government off-the-shelf (GOTS) programs and applications.

10.8.6.1.1  (07-31-2007)
Overview

  1. It is the policy of the IRS to protect its information resources and allow the use, access, and disclosure of information in accordance with applicable laws, policies, federal regulations, OMB Circulars, and Treasury Directives (TDs). All IT resources belonging to, or used by the IRS, shall be protected at a level commensurate with the risk and magnitude of harm that could result from loss, misuse, or unauthorized access to that IT resource.

  2. This policy delineates the security management structure, assigns responsibilities, and lays the foundation necessary to measure progress and compliance. Requirements in this policy are subdivided under three major security control areas: management, operational, and technical.

10.8.6.1.2  (08-31-2010)
Scope

  1. The provisions in this manual apply to all offices, business, operating, and functional units within the IRS, and are to be applied when IT is used to accomplish the IRS mission. This manual also applies to individuals and organizations having contractual arrangements with the IRS, including employees, contractors, vendors, and outsourcing providers, which use or operate IT systems containing IRS data.

  2. The guidance in this document is not intended to comprise a new process capability model or software development methodology. Instead, it is intended to provide information that will help the reader select and adapt, augment, or extend an existing capability model and development methodology in ways that will greatly increase the likelihood that the software produced will be not only correct, high quality, reliable, and reusable, but also secure. In addition, this Developer’s Guide provides some practical best practices for developers to apply throughout the System Development Life Cycle (SDLC), within the framework of whatever process and methodology they work within.

  3. For coding guidelines, see IRM 2.5.3, Systems Development, Programming and Source Code Standards and the Application Development PMO Security’s Final Secure Coding Guidelines document.

10.8.6.1.3  (07-31-2007)
Authority

  1. IRM 10.8.1, Information Technology (IT) Security Policy and Guidance, establishes the security program and the policy framework for the IRS. If there is a conflict with or variance from this IRM and IRM 10.8.1, IRM 10.8.1 shall take precedence, unless the security controls/requirements within this IRM are more restrictive.

10.8.6.1.4  (07-31-2007)
IRM Section Topics

  1. This manual contains information on the following topic areas:

    • Purpose

    • General Policy

    • Management Controls

    • Operational Controls

    • Technical Controls

    • Deviations/Exceptions

    • Application Vulnerabilities (Exhibit 10.8.6-1)

    • Glossary (Exhibit 10.8.6-2)

    • References (Exhibit 10.8.6-3)

    • Recommended Best Practices (Exhibit 10.8.6-4)

    • Checklists for Additional Systems Development, Programming, and Source Code Standards (Exhibit 10.8.6-5)

10.8.6.2  (08-31-2010)
General Policy

  1. Recommended practices for Secure Application Development shall be implemented on all applicable platforms, in addition to the general requirements stated in this IRM and IRM 10.8.1.

  2. Programming language(s) in which an application component is written shall not include known vulnerabilities (e.g., susceptibility to buffer overflow) that could make the application vulnerable to compromise or failure.

  3. Effective safeguards and countermeasures (e.g., security wrappers, precompilers, application firewalls) shall be implemented during development and deployment to counteract known vulnerabilities.

  4. Application(s) shall only use approved low-risk services, protocols, and technologies, unless there is a critical need for a higher-risk service, protocol, or technology.

  5. Application(s) shall have automated configuration tool(s) or script(s) that enable the administrator to set the parameters that govern the operation of the application’s security components and the characteristics of its security properties (see the illustrative sketch following this list).

    1. The tool/script shall include a "feedback" capability that informs the administrator of possible security deficiencies in the configuration he/she has defined.

    2. For any application security component configuration that cannot be automated by a tool or script, manual configuration procedures shall be documented, and training should be provided to administrators on how to perform those procedures.

  6. Program code and applications used for development purposes shall be run on separate physical hosts from production systems.

  7. Program code and applications shall not be used until they comply with this IRM.

  8. See the Application Development PMO Security’s Final Secure Coding Guidelines for additional secure coding guidance.
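
    Note:

    The following Python sketch is a minimal illustration of the automated configuration tool/script and "feedback" capability described in paragraph (5) above. It is provided for illustration only; the parameter names, thresholds, and output file are hypothetical examples, not approved IRS settings.

      # Illustrative configuration helper with "feedback" on possibly weak settings.
      # All parameter names and thresholds below are hypothetical examples.
      import json

      def check_settings(settings: dict) -> list[str]:
          """Return a list of possible security deficiencies in the settings."""
          warnings = []
          if not settings.get("require_tls", False):
              warnings.append("require_tls is disabled; transmissions may be unprotected.")
          if settings.get("session_timeout_minutes", 0) > 15:
              warnings.append("session_timeout_minutes exceeds 15; consider a shorter timeout.")
          if settings.get("min_password_length", 0) < 12:
              warnings.append("min_password_length is below 12 characters.")
          if settings.get("failed_login_lockout_threshold", 0) == 0:
              warnings.append("failed_login_lockout_threshold is not set; lockout is disabled.")
          return warnings

      def apply_settings(settings: dict, path: str = "app_security.json") -> None:
          """Write the parameters that govern the application's security components."""
          for problem in check_settings(settings):      # feedback to the administrator
              print(f"WARNING: {problem}")
          with open(path, "w", encoding="utf-8") as fh:
              json.dump(settings, fh, indent=2)
          print(f"Configuration written to {path}")

      if __name__ == "__main__":
          apply_settings({
              "require_tls": True,
              "session_timeout_minutes": 30,            # triggers a warning
              "min_password_length": 12,
              "failed_login_lockout_threshold": 5,
          })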

10.8.6.2.1  (07-31-2007)
Roles and Responsibilities

  1. IRM 10.8.2, Information Technology Security Roles and Responsibilities, defines IRS-wide roles and responsibilities related to IRS information and computer security, and is the authoritative source for such information; refer to that IRM for additional information.

10.8.6.3  (07-31-2007)
Management Controls

  1. The IRS shall implement management security controls to mitigate risk of IT applications and electronic information loss in order to protect the organization's mission. See IRM 10.8.1 for general information on computer security management control requirements.

10.8.6.3.1  (01-24-2013)
System and Services Acquisition

  1. See the System and Services Acquisition section in IRM 10.8.1 for additional guidance.

10.8.6.3.1.1  (01-24-2013)
Developer Security Testing

  1. Establish and use appropriate security metrics during each security review/audit to measure the degree to which security criteria/requirements have or have not been satisfied.

  2. Ensure security testing is performed both on individual units/components and on the whole integrated application. A wide range of test techniques shall be combined in order to provide as broad and accurate a picture of the tested entity's security posture as possible. This includes Static Application Security Testing (examining the source code) and Dynamic Application Security Testing (examining the application in its running state).

  3. Misuse cases shall be developed. The purpose of the misuse case is to exhaustively identify the types of attacks that can be made against the system or application and how a system or application should respond to such attacks and, if possible, recover from such attacks.

    Note:

    This approach goes a long way toward addressing the security requirements of an application. A misuse case can also help prepare test cases purely to test the security strength of the system; a sketch of one such test follows this list.

  4. Applications being developed shall meet the security requirements for the applicable OS environment in which they will function.

  5. Developers shall conduct security testing of the security requirements being developed for the application to ensure the security features are functioning as desired.

    1. This security testing occurs during the development process and does not replace the Security Certification and Accreditation (SCA) testing that takes place.

  6. Developers shall resolve the issues found during development or include these items in the Security Assessments and carry them in a Plan of Action and Milestones (POA&M).

  7. See IRM 10.8.8, Information Technology (IT) Security, Live Data (LD) Protection Policy for additional guidance on the usage of live data.
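
    Note:

    The following sketch shows, in Python and pytest style, how a misuse case can be expressed as an executable security test, as mentioned in the Note to paragraph (3) above. The authenticate() stub, the account names, and the lockout threshold are hypothetical stand-ins for the application under test.

      # Misuse-case test sketch. The authenticate() function is a minimal stand-in
      # for the real application under test; names and threshold are hypothetical.
      FAILED = {}
      LOCKOUT_THRESHOLD = 3

      def authenticate(username: str, password: str) -> str:
          """Minimal stand-in: locks an account after repeated failures."""
          if FAILED.get(username, 0) >= LOCKOUT_THRESHOLD:
              return "locked"
          if password != "correct-horse-battery-staple":
              FAILED[username] = FAILED.get(username, 0) + 1
              return "denied"
          return "granted"

      def test_misuse_repeated_password_guessing_locks_account():
          """Misuse case: an attacker repeatedly guesses passwords for one account.
          Expected response: the account locks instead of allowing unlimited guesses."""
          for _ in range(LOCKOUT_THRESHOLD):
              assert authenticate("victim", "guess") == "denied"
          assert authenticate("victim", "guess") == "locked"
          # Once locked, even the correct password is refused until an administrator intervenes.
          assert authenticate("victim", "correct-horse-battery-staple") == "locked"

      if __name__ == "__main__":
          test_misuse_repeated_password_guessing_locks_account()
          print("misuse-case test passed")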

10.8.6.3.1.1.1  (07-31-2007)
Code Based Elements Requirements

  1. An application process shall remove temporary objects from memory or disk before it terminates.

  2. The application shall adequately validate user inputs before processing them.

  3. The application shall be tested for vulnerable buffer overflows.

  4. The application shall include an explicit error and exception handling capability. Application error and exception messages displayed to users shall not reveal information that could be utilized in a subsequent attack (see the sketch following this list).

  5. An application failure shall not result in an insecure application or system state.
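
    Note:

    The following Python sketch is a minimal illustration of the input validation and error/exception handling requirements in paragraphs (2) and (4) above. The field format, log destination, and user-facing message are hypothetical examples.

      # Input validation plus generic error handling that does not leak details.
      import logging
      import re

      logging.basicConfig(level=logging.INFO)
      log = logging.getLogger("app")
      ACCOUNT_ID_PATTERN = re.compile(r"\d{1,10}")      # whitelist: one to ten digits only

      def lookup_account(raw_account_id: str) -> str:
          # Validate the user input before it is processed any further.
          if not ACCOUNT_ID_PATTERN.fullmatch(raw_account_id or ""):
              raise ValueError("invalid account id format")
          return f"record-for-{raw_account_id}"

      def handle_request(raw_account_id: str) -> str:
          try:
              return lookup_account(raw_account_id)
          except Exception:
              # Full details go to the protected log, never to the user.
              log.exception("account lookup failed")
              # The user-facing message reveals nothing useful for a follow-on attack.
              return "The request could not be processed. Please contact the help desk."

      if __name__ == "__main__":
          print(handle_request("12345"))
          print(handle_request("1; DROP TABLE accounts"))   # rejected by input validation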

10.8.6.3.1.1.2  (01-24-2013)
Environment Information

  1. Hardening procedures for the Operating System (OS) in use shall be defined and followed for systems.

  2. Only files and folders required for systems functionality shall be copied to the production folder(s).

  3. File upload(s) shall be restricted to ensure that files can only be uploaded into intended locations and that the file size and type are appropriate for available resources (see the sketch following this list).

  4. Scans for malicious logic shall be performed in accordance with IRM 10.8.1 and the IRM for the OS in use.

  5. Debug binaries shall not be used in production environments.

  6. The test environment shall be an approved close simulation of the intended production environment, in which the application will be integrated.

  7. IRS Information Technology (IT) approved test tools shall be used to support security test techniques whenever feasible.

  8. The application shall operate in the most secure configuration of the operating system, database, framework, and middleware in accordance with application policy.

  9. The application shall not be designed or require the use of default configurations or otherwise insecure configurations of any developmental or non-developmental components, whether in the application itself or its environment.

  10. The application shall allow for the disabling or removing of default account names, passwords, etc., used in the application or its environment.

  11. See the Application Development PMO Security's Final Secure Coding Guidelines for additional environmental and design secure coding considerations.
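
    Note:

    The following Python sketch illustrates the file upload restrictions in paragraph (3) above: confining uploads to an intended location and limiting file type and size. The upload directory, allowed extensions, and size cap are hypothetical examples.

      # Upload validation sketch: intended location only, bounded size, allowed types.
      import pathlib

      UPLOAD_DIR = pathlib.Path("/var/exampleapp/uploads")   # hypothetical intended location
      ALLOWED_EXTENSIONS = {".pdf", ".csv", ".txt"}           # hypothetical permitted types
      MAX_BYTES = 5 * 1024 * 1024                             # hypothetical 5 MB cap

      def save_upload(filename: str, content: bytes) -> pathlib.Path:
          if len(content) > MAX_BYTES:
              raise ValueError("file exceeds the permitted size")
          # Keep only the final path component so '../../etc/passwd' cannot escape.
          safe_name = pathlib.Path(filename).name
          if pathlib.Path(safe_name).suffix.lower() not in ALLOWED_EXTENSIONS:
              raise ValueError("file type is not permitted")
          destination = (UPLOAD_DIR / safe_name).resolve()
          if UPLOAD_DIR.resolve() not in destination.parents:
              raise ValueError("upload escapes the intended location")
          destination.write_bytes(content)
          return destination

      if __name__ == "__main__":
          for name, data in [("report.exe", b"x"), ("big.pdf", b"x" * (MAX_BYTES + 1))]:
              try:
                  save_upload(name, data)
              except ValueError as err:
                  print(f"rejected {name}: {err}")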

10.8.6.3.1.1.3  (08-31-2010)
Secure Directory Access Control Configuration

  1. The operating system/framework access controls on the directories in which the application components are stored shall be configured to protect the application executable(s) from unauthorized access, modification, and deletion.

    1. The directories shall be configured to protect all sensitive application data, such as access control lists, security event logs, and other sensitive data used by the application (including data decrypted by the application before use); see the permission-hardening sketch at the end of this section.

    2. The directory configuration(s) shall also prevent the insertion of any file of any type (binary or text) by an entity that is not explicitly authorized to insert a file into the directory.

  2. All application and environment software executable(s) (both developmental and non-developmental), source code files, and documentation shall be placed under secure configuration management with strict access and version control.

    1. This procedure prevents any inappropriate access given the role of the person attempting the access and the life-cycle phase in which the attempt is made. For example, source code checked in prior to a code review should not be write-accessible after that; instead, if changes must be made, a new version of the code should be generated for the update (this will prevent a rogue developer from secretly inserting malicious logic into a code version that has already been approved by the reviewers).

  3. All security components in the execution environment (e.g., access control systems, cryptographic components) with which the application interacts, such as procedure calls, system calls, Application Programming Interface (API), or networking protocols, shall be configured based on least privilege and least functionality principles in accordance with IRM 10.8.1.

  4. All execution environment components (including host file system, databases, and other data stores) that store application executable, data, configuration, security, or include files shall be configured to prevent access to those files by any role other than the role assigned to the application itself and the role(s) assigned to the application’s administrator(s). All other roles, including user roles, shall be denied direct access to the resources.

  5. The application host's file system shall be configured to isolate security processes within the application.

  6. Non-developmental software components in the application or its execution environment shall be the latest versions, with all security patches applied.

  7. All default accounts (e.g., "nobody," "Administrator" ) in non-developmental server components shall be disabled or renamed unless the application absolutely cannot run without those specific account names.

  8. All default accounts shall have their default passwords changed upon installation, in accordance with IRM 10.8.1.

  9. All sensitive data at rest shall be protected in accordance with IRM 10.8.1.

  10. All cryptography shall be in accordance with IRM 10.8.1.
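
    Note:

    The following Python sketch illustrates the directory permission hardening described in this section for a POSIX file system. The paths and permission modes are hypothetical examples; actual values must follow the applicable hardening guidance.

      # POSIX-only sketch: tighten and verify permissions on application directories.
      import os
      import stat

      APP_DIRS = {
          "/opt/exampleapp/bin":  0o750,   # executables: owner rwx, group rx, no world access
          "/opt/exampleapp/conf": 0o750,   # configuration and security data
          "/opt/exampleapp/logs": 0o770,   # security event logs (owner/group only)
      }

      def harden(path: str, mode: int) -> None:
          os.chmod(path, mode)
          for name in os.listdir(path):
              full = os.path.join(path, name)
              if os.path.isfile(full):
                  os.chmod(full, mode & 0o660)   # plain files get no execute or world bits

      def report_world_writable(path: str) -> None:
          for root, _dirs, files in os.walk(path):
              for name in files:
                  full = os.path.join(root, name)
                  if os.stat(full).st_mode & stat.S_IWOTH:
                      print(f"WARNING: {full} is world-writable")

      if __name__ == "__main__":
          for directory, mode in APP_DIRS.items():
              if os.path.isdir(directory):
                  harden(directory, mode)
                  report_world_writable(directory)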

10.8.6.3.2  (01-24-2013)
Documentation

  1. Security features, service levels, and management requirements shall be specified and documented in accordance with IRM 10.8.1.

  2. Security requirements shall be included in the overall application requirements specification and traceability matrix. A separate security requirements specification and matrix shall not be generated.

  3. The results of the application's security integration tests shall be provided as input to the security risk assessment of the application required before it can be deployed.

  4. After moving into production, the application's security posture shall be periodically reviewed to ensure that new vulnerabilities have not emerged. Specifically, impact assessments should be performed every time a significant change is made to the application itself or its execution environment, such as (but not limited to): substitution of a different product, application of a patch, change of configuration parameters, etc. Refer to the Glossary of this IRM for a definition of a significant change.

10.8.6.3.3  (08-31-2010)
Security Assessment and Authorization

  1. Refer to IRM 10.8.1, Information Technology (IT) Security, Policy and Guidance for additional information.

10.8.6.3.3.1  (08-31-2010)
Security Assessments

  1. Security Assessments (initial, iterative, and re-assessments) shall be performed throughout the software's lifecycle.

  2. Refer to the Application Development PMO Security's Final Secure Coding Guidelines document for additional secure coding security assessment information.

10.8.6.3.3.2  (01-24-2013)
Plan of Action and Milestones

  1. The results from security assessments shall be entered into the Plan of Action and Milestones (POA&M) for management approval or risk acceptance.

10.8.6.3.4  (01-24-2013)
Security-Related Activity Planning

  1. Security requirements shall be derived during the design phase and a traceability matrix developed.

  2. Security reviews and software code scans shall be implemented at each phase or milestone of development.

10.8.6.3.5  (08-31-2010)
Risk Assessment

  1. For general information on Authorization Risk Assessment Methodology, please contact your Business Unit Security Program Management Officer (PMO).

  2. Risk management activities and checkpoints shall be integrated throughout the software life cycle.

  3. Threat modeling shall be performed to assess the level of risk that the threat will occur and its potential impact if it does.

  4. All developed and modified code shall have a security risk assessment conducted prior to being deployed.

10.8.6.3.5.1  (01-24-2013)
Vulnerability Scanning

  1. For information on the use of tools needed for code review, see the Information System Monitoring Tools and Techniques section of this IRM.

10.8.6.4  (07-31-2007)
Operational Controls

  1. The IRS shall implement operational security controls, which are primarily implemented and executed by personnel for each information system. See IRM 10.8.1 for general information and computer security operational control requirements.

10.8.6.4.1  (08-31-2010)
System and Information Integrity

  1. See IRM 10.8.1 for IT Security, System and Information Integrity requirements.

10.8.6.4.1.1  (01-24-2013)
Range and Type Errors

  1. Buffer overflows shall be prevented through the use of appropriate programming languages, strong typing, bounds checking, and other mechanisms available to the code writer (e.g., use of Microsoft’s Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) and similar stack protection facilities found in other development environments).
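
    Note:

    The following sketch illustrates the bounds-checking discipline described above. Python is memory-safe and not subject to classic buffer overflows, so the example simply shows an explicit length check before data is copied into a fixed-size buffer (for instance, before handing it to a native library); the buffer size is a hypothetical example.

      # Explicit bounds checking before filling a fixed-size buffer.
      MAX_FIELD_BYTES = 64   # hypothetical fixed buffer size

      def copy_into_buffer(data: bytes, buffer: bytearray) -> None:
          """Copy data into a fixed-size buffer only after an explicit length check."""
          if len(data) > len(buffer):
              raise ValueError(f"input of {len(data)} bytes exceeds buffer of {len(buffer)} bytes")
          buffer[:len(data)] = data

      if __name__ == "__main__":
          buf = bytearray(MAX_FIELD_BYTES)
          copy_into_buffer(b"well within bounds", buf)
          try:
              copy_into_buffer(b"x" * 1000, buf)      # rejected instead of overflowing
          except ValueError as err:
              print(f"rejected oversized input: {err}")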

10.8.6.4.1.2  (01-24-2013)
Security Functionality Verification

  1. Refer to the Developer Security Testing section for practices.

  2. A multifaceted test approach shall use as wide a variety of techniques and technologies as time and resources allow. These techniques may include, but are not limited to, the following (a fuzz-testing sketch follows this list):

    1. Penetration testing.

    2. Encryption testing.

    3. Security attack testing.

    4. Fuzz testing.

    5. Automated vulnerability scanning.

  3. Software testers shall use both white and black box testing, as appropriate to the software development phase.

  4. A Security Test Plan shall be developed to include test cases, including abuse and misuse cases, that demonstrate the following:

    1. The software behaves consistently and securely under all conditions, both expected and unexpected.

    2. If the software fails, the failure does not leave the software, its data, or its resources exposed to attack.

    3. Dormant code, dead code, debug code, and test harness constructions are removed from the final source code prior to the final build.

      Note:

      For purposes of security analyses, debug and test-harness code may remain in code submitted to static source security analysts. Testers shall ensure that the risks of such code, as described above, are specifically profiled in risk profiles and assessments.

    4. Interfaces and interactions among components at the application, framework/middleware, and operating-system levels are consistently secure.

    5. Exception and error handling resolve all faults and errors in ways that do not leave the software, its resources, its data, or its environment vulnerable to unauthorized modification (disclosure) or denial of service.
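
    Note:

    The following Python sketch illustrates the fuzz testing technique listed in paragraph (2) above: random inputs are fed to an input-handling routine, and only documented, handled failure modes are tolerated. The parse_record() routine and its record format are hypothetical stand-ins for the code under test.

      # Minimal fuzz-testing sketch.
      import random

      def parse_record(raw: bytes) -> dict:
          """Stand-in parser: expects UTF-8 'key=value' pairs separated by ';'."""
          text = raw.decode("utf-8")                 # may raise UnicodeDecodeError
          record = {}
          for item in filter(None, text.split(";")):
              key, sep, value = item.partition("=")
              if not sep:
                  raise ValueError(f"malformed item: {item!r}")
              record[key.strip()] = value.strip()
          return record

      EXPECTED = (ValueError, UnicodeDecodeError)    # documented, handled failure modes

      def fuzz(iterations: int = 10_000, seed: int = 1) -> None:
          rng = random.Random(seed)
          for _ in range(iterations):
              blob = bytes(rng.randrange(256) for _ in range(rng.randrange(0, 64)))
              try:
                  parse_record(blob)
              except EXPECTED:
                  pass                               # rejected cleanly: acceptable
              # Any other exception escapes and fails the run, flagging a defect.

      if __name__ == "__main__":
          fuzz()
          print("fuzzing completed without unexpected failures")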

10.8.6.4.1.3  (08-31-2010)
Information System Monitoring Tools and Techniques

  1. Security testing tools and static analysis code scanning tools shall be used with the ability to test against the following classes of code vulnerabilities:

    1. Security-related functions.

    2. Input-output validation and encoding errors.

    3. Error handling and logging vulnerabilities.

    4. Insecure components or API connections.

    5. Coding errors.
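
    Note:

    The following Python sketch is a toy illustration of the static analysis scanning described above; it walks a source tree and flags two of the vulnerability classes listed (insecure calls and possible hard-coded credentials). The rule lists are hypothetical examples and do not replace the approved scanning tools.

      # Toy static-analysis sketch using the standard ast module.
      import ast
      import pathlib
      import sys

      RISKY_CALLS = {"eval", "exec", "system"}              # example insecure calls
      CREDENTIAL_NAMES = {"password", "passwd", "secret"}   # example credential variable names

      def scan_file(path: pathlib.Path) -> list[str]:
          findings = []
          try:
              tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
          except SyntaxError as err:
              return [f"{path}: could not parse ({err.msg})"]
          for node in ast.walk(tree):
              if isinstance(node, ast.Call):
                  name = getattr(node.func, "id", getattr(node.func, "attr", ""))
                  if name in RISKY_CALLS:
                      findings.append(f"{path}:{node.lineno}: call to {name}()")
              elif isinstance(node, ast.Assign):
                  for target in node.targets:
                      if (isinstance(target, ast.Name)
                              and target.id.lower() in CREDENTIAL_NAMES
                              and isinstance(node.value, ast.Constant)
                              and isinstance(node.value.value, str)):
                          findings.append(f"{path}:{node.lineno}: possible hard-coded credential")
          return findings

      if __name__ == "__main__":
          root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
          for source in sorted(root.rglob("*.py")):
              for finding in scan_file(source):
                  print(finding)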

10.8.6.4.2  (08-31-2010)
Configuration Management

  1. See IRM 10.8.1 for IT Security Configuration Management (CM) requirements.

  2. Before each code review begins, design documentation and code to be reviewed shall be checked into the CM system and baselined.

  3. Every configuration item shall be checked into the CM system as a baseline before it is reviewed or tested.

  4. Roles and duties in the CM system shall be separated (development, test, and production personnel should be assigned different, non-overlapping roles with separate access rights to the system).

10.8.6.4.2.1  (08-31-2010)
Patch Management

  1. Security Patch Management shall be implemented in accordance with IRM 10.8.1 and IRM 10.8.50, Service-wide Security Patch Management.

  2. The system shall have patch management with minimal user intervention.

  3. The system shall be able to determine and report on the patch and risk status of system components (an illustrative reporting sketch follows this list).

  4. From implementation through retirement of the application, all available security patches shall be applied to the application's COTS components. All intrusion detection systems, application firewalls/gateways, virus scanners, etc., used in the application's execution environment shall also be updated with all available new signature files.
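
    Note:

    The following Python sketch illustrates the patch status reporting requirement in paragraph (3) above by comparing installed package versions against a minimum approved version manifest. The package names and minimum versions are hypothetical; the sketch assumes the third-party packaging library is available.

      # Report whether installed Python components meet hypothetical minimum patched versions.
      from importlib import metadata
      from packaging.version import Version   # 'packaging' is a common third-party dependency

      MINIMUM_APPROVED = {
          "requests": "2.31.0",                # hypothetical minimum patched versions
          "cryptography": "41.0.0",
      }

      def patch_report() -> list[str]:
          findings = []
          for package, minimum in MINIMUM_APPROVED.items():
              try:
                  installed = metadata.version(package)
              except metadata.PackageNotFoundError:
                  findings.append(f"{package}: not installed")
                  continue
              if Version(installed) < Version(minimum):
                  findings.append(f"{package}: {installed} is below approved minimum {minimum}")
              else:
                  findings.append(f"{package}: {installed} OK")
          return findings

      if __name__ == "__main__":
          for line in patch_report():
              print(line)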

10.8.6.4.3  (01-24-2013)
Awareness and Training

  1. In accordance with FISMA Specialized IT Training, the development team shall be trained in basic privacy law requirements (such as Privacy Act requirements and E-Government Act requirements), application development-related best practices (including threat modeling, misuse cases, test tools), and overall development security practices for implementing secure code.

  2. Refer to IRM 10.8.1,Information Technology (IT) Security, Policy and Guidance for additional information.

10.8.6.4.4  (08-31-2010)
Personnel Security

  1. Refer to IRM 10.8.1 for Personnel Security requirements for software developers.

10.8.6.5  (07-31-2007)
Technical Controls

  1. The IRS shall implement technical security controls and ensure the design of IT systems that process, store, or transmit all information shall include, at a minimum, the technical security requirements discussed in this IRM. See IRM 10.8.1 for general information and computer security technical control requirements.

10.8.6.5.1  (01-24-2013)
Access Controls

  1. Access control shall be enforced by the presentation layer in addition to being implemented at other layers.

  2. Buttons and links for functions and assets that are not authorized for the user shall not be displayed. This is not to say, however, that access control should rely solely on the presentation layer.

  3. The application shall not rely on "security through obscurity" in lieu of proper defense in depth techniques to protect application data, resources, and executable(s). Security through obscurity techniques should be used only as a "nice to have" addition to adequate defense in depth techniques that on their own reduce risk to the maximum acceptable level.

  4. It shall not be possible to bypass Authentication by alternative paths/URLs.

  5. The intended restricted directory access policy shall be enforced to prevent attackers from accessing unauthorized files inside and outside the restricted directory.

  6. The security policy for the application shall deny by default all permissions and access privileges (see the illustrative sketch at the end of this section). The application shall be designed to require the application's administrator to explicitly assign all permissions and access privileges to both human users and software processes, regardless of whether those permissions and privileges are granted individually, to user groups, or associated with roles or other attributes.

  7. To prevent Denial of Service (DoS) attacks, no entity (human or process) other than the authorized administrator and the application's authentication mechanism shall be able to lock out legitimate users.

  8. The application shall not allow any user other than the authorized administrator to authorize or change privileges assigned to users.

  9. Entities (human users and processes) shall not be allowed to retain privileges "for future use." Instead, a privilege should be assigned to the entity when it is needed, and revoked as soon as the function requiring the privilege has been completed.

    1. The development team lead shall ensure access privileges are reviewed, periodically, during the development process and remove unnecessary privileges.

  10. The application shall not allow any user to gain access to the application without first being positively authenticated by the application's authentication mechanism, unless the application is a public access application.

  11. The application shall not allow any unauthorized user to gain access to restricted functions. For example, an attacker should not be able to access a restricted function simply by entering the pathname (i.e., URL/URI) that points to that function.

  12. All interfaces and connections relied on by the application shall function correctly and securely, and shall not be capable of being subverted by an attacker.

    1. During application testing, verify that all secure interfaces and connections relied on by the application (from network physical layer up to application layer) function correctly, are connected to the intended systems, and cannot be easily subverted by an attacker.

  13. The database management system and host file system shall have access controls to prevent unauthorized users from modifying any of the metadata associated with the database objects or XML documents.

    1. The application’s host file system and database management system (DBMS, if any) access controls shall be configured to prevent unauthorized users from modifying any of the metadata associated with a database object or XML document.

  14. An application shall not allow direct access to the database management system and shall ensure that no features can be exploited to bypass database management system access controls.

    1. An application that provides access to a database shall not contain features that can be exploited by a user to bypass the database management system’s access controls in order to directly modify the file system objects (files, directories) containing the database entries/records. The access controls of the file system directory containing those objects must be configured to prevent access by any role except the appropriate privileged role (e.g., database administrator).

    2. An application that provides access to a database shall not contain features that can be exploited by a user in order to assume the identity and permissions of a privileged user role (e.g., the database administrator) in order to access database objects in ways that the user’s own role would not permit.

    3. The access controls relied upon by a database or record management application shall be configured to prevent unauthorized users from modifying or deleting any established associations, reference links, or other relationships between database objects or records accessible via the application.

    4. The application shall ensure that every entity is authorized, through granting of appropriate privileges, to perform the functions it attempts to perform, and to access the resources/data it attempts to access in the specific way it attempts to access them.

    5. The application’s authorization service shall assign privileges to all entities based on a Role-Based Access Control (RBAC) scheme that is implemented and enforced by the discretionary and mandatory access controls that protect the application’s data and resources. At a minimum, privileged accounts (e.g., the administrator account) must be assigned unique roles.

    6. If the application’s authorization service supports the creation of group accounts, it shall also enable the assignment of privileges to group accounts.

    7. The application’s authorization service shall ensure that every entity relinquishes the privilege it has been granted as soon as it finishes performing the function for which the privilege was required. The application shall be designed so that its own processes relinquish their privileges as soon as they complete the operations for which those privileges were required. Even if an entity or process will need the same privilege later, the privilege must be assigned only at the time it is needed, and must be relinquished immediately when it is no longer needed.

    8. The access controls relied on by the application must be configured to prevent unauthorized entities from reading, modifying, augmenting, deleting, moving, or (in the case of executables) executing:
      1. The data written by the application.
      2. The configuration and security data used by the application.
      3. The application’s executables.

  15. The application shall not rely solely on database views or portal pages for access control.

    1. The following techniques shall not be relied upon to control access to data in a database or on a web server:
      1. Absence of certain data from a database view.
      2. Absence of certain web links from a portal page or web page.

    2. The application shall prevent an unauthorized user from modifying, overwriting, or augmenting the privileges assigned to an individual account, group account, or role.

    3. The application shall not be able to be used to override any of the file system or database access controls configured for a data object that is made accessible by the application. Specifically, the application must not allow a user to perform any unauthorized operation to a data object if the database/file system access controls protecting that data object are configured to prevent the user (or the user’s role) from performing that operation. Furthermore, if a user attempts to use the application to perform any unauthorized operation, the application must issue a warning message to the user informing him/her that he/she is attempting to perform an unauthorized operation.

    4. The application shall prevent any entity from performing application functions that entity’s authorizations do not explicitly permit it to perform.

    5. The application shall prevent an entity from performing any functions which the entity’s role is not explicitly granted permissions to perform.

    6. A portal/web application’s access controls shall not be implemented so that the user identification and authentication (I&A) dialog is only invoked when the user clicks on a link on the portal/web page, but should also be invoked when the user enters the URL/URI by typing it into the browser’s "Location," "Address," "Open," etc. line or choosing it from a list of saved bookmarks. Regardless of the method used by the user to enter a URL/URI, the application must invoke the necessary user I&A dialogue before granting the user access to the resource indicated by the URL/URI.

    7. Non-privileged processes shall not be granted access to privileged functions or data associated with privileged functions (e.g., security configuration files, security directories and databases, critical security parameters). Furthermore, non-privileged processes must not be granted any privileges associated with privileged roles.

    8. The application shall contain few or no processes that require privileged roles. The only processes that require privileged roles should be those that:
      1. Perform critical security functions, such as I&A, authorization, auditing, etc.
      2. Invoke external security functions (e.g., cryptographic modules).
      3. Access security data stores via trusted interfaces.

    9. Clients or proxy agents shall not be allowed to invoke privileged application processes.

    10. The application host’s file system access controls, discretionary and mandatory, shall ensure that entities are granted or denied access to the application’s executable(s), data, and resources based only on the privileges authorized to those entities under the application’s RBAC scheme.

    11. The access controls of the file system or data store to which the application writes user-created data shall prevent all users except the data’s creator from assigning or changing the discretionary access properties (read, write, delete) of the data he/she created while using the application.

    12. Privileged roles shall be granted only those privileges they need to perform their privileged functions. Entities that need to perform both privileged and non-privileged functions should be assigned two different accounts, one for the privileged role, and one for the non-privileged role. It should not be possible to perform non-privileged functions when logged into the privileged account, or to perform privileged functions when logged into the non-privileged account.

    13. An access control matrix shall be used to plan specific rules.

    14. The application shall not disclose actual object identifiers:
      1. Define a clear approach for protecting each type of object.
      2. Design a solution for masking actual object references.

    15. The account used to access the database shall have the minimum amount of privilege required by the application.

    16. Database references shall not expose primary/foreign keys or column and table names. Object references through a session object shall be used.

    17. A WHERE clause shall be used to enforce access control to ensure that expected relationships remain true (e.g., the current user is the owner of the referenced object).

  16. The application shall verify that the actual results match the expected results criteria.

    1. For defense in depth, it is important to verify that the results match (e.g., if a single record is expected, ensure that only one record is returned).

  17. Use Data Access APIs to abstract and encapsulate all access to the data source.

  18. The application shall deter DoS attacks by:

    1. Authenticating before allocating resources.

    2. Not handling too many requests from a single user.

    3. Protecting bottlenecks in code and database access.

    4. Cancelling requests that are superseded by a new incoming request.

  19. The application shall continue functioning, possibly in a degraded mode, when subjected to input patterns that indicate a denial of service attack. If the application must shut down due to such input patterns, the shutdown should be "graceful." The application must not fail in such a way that it exposes either its executable image or any sensitive information held in its temporary memory (e.g., cache, compromised program module) to unauthorized access by an attacker.

  20. Any remote procedure call (RPC) issued by an application process to an execution environment component or to a remote application component shall not cause that remote component to perform any unauthorized or unexpected actions that could compromise the data, configuration files, security files, or executables owned or used by the remote component.

  21. An application shall verify any environment variables for correctness and report any deficiencies to the administrator.

    1. Before acting on an environment variable, the application shall verify the correctness of that variable (in terms of having an expected value), and shall report to the administrator any deficiencies in the variable (e.g., an unexpected value). The application shall verify that the resources and conditions expressed by the environment variable are:
      1. Present.
      2. Adequate.
      3. Operational.
      4. Correct, as necessary to ensure secure operation and interaction between the application and the environment component/function.

  22. An application acting as a web front-end to a database or other data store shall grant users only read-only access to the backend database/data store. Users shall not be able to directly write to or delete from the database/data store.

    1. To prevent the possibility of structured query language (SQL) injection or other command injection attacks, an application that acts as a web front-end to a database or other data store shall grant users only read-only access to the backend database/data store. Users shall not be allowed to directly write to or delete from the database/data store. Instead, all user requests to update/delete should be made via user input in HTML, XHTML, or XML forms implemented by the front-end application, then converted by that application into a syntax (e.g., SQL) that can be understood by the backend database/data store. The web database front-end application should reject any user input that contains expressions in SQL, a scripting language, or a procedural language.

  23. The application shall isolate any high-risk services to a separate execution domain.

    1. Any high-risk services used by the application must be isolated by the application’s host file system within a separate execution domain partition, ideally on a separate hard drive.

    2. The application host’s file system access controls must be configured to protect the availability and integrity of the application’s executable(s), and all files and directories the application needs to read from or write to during its execution. In addition, the application should include or invoke defense-in-depth mechanisms, such as encryption and digital signature, to augment the file system access controls, in order to ensure continued protection of the confidentiality, integrity, and availability of the application’s data, configuration, security, and executable files in case the file system access controls fail or are compromised.

    3. The application shall write all data/files to a different file system directory from that containing the application’s executable(s).

  24. Application execution shall not cause any file system objects that belong to any entity other than the application itself to be modified, deleted, overwritten, or substituted. In addition, the application should not contain vulnerabilities that could be exploited by an attacker to perform any such unauthorized modifications, deletions, overwriting, or substitutions.

    1. Privileged processes in custom-developed components shall ensure that untrusted processes are not invoked in non-developmental components.

    2. Untrusted processes in non-developmental components shall not be permitted to invoke privileged processes in custom-developed components.

  25. The application shall enforce access controls against path guessing or listing of files and directories.

    1. Users who are able to infer the file system directory structure indicated by the relative pathnames in user-viewable source code should be prevented, by the configuration of the file system access controls, from:
      1. Listing the contents of any file system directory (e.g., to discover unpublished resources).
      2. Directly accessing at the file system level any file or resource stored on the application host.

    2. A server application or web service shall validate the entity that sends it any data intended for display to users (e.g., web page content), and should not display that data if the sending entity cannot be verified as trustworthy.

    3. A certificate shall not be used without first checking its expiration.

    4. Even when the certificate read is valid and unexpired, the host-specific certificate data shall be validated to ensure the certificate is for the site originally requested. If the host-specific data contained in a certificate is not checked, a redirection or spoofing attack may allow a malicious host with a valid certificate to provide data, impersonating a trusted host.

    5. A Public Key Encryption (PKE) application that performs I&A shall not authenticate any entity that presents a certificate whose status cannot be validated by the application’s certificate validation service using an approved standard certificate validation technology.
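
    Note:

    The following Python sketch combines three of the requirements above into one minimal illustration: a deny-by-default role check (paragraph (6)), a parameterized query in place of string-built SQL (paragraph (22)), and a WHERE clause that ties the record to its owner (paragraph (15), item 17). The table layout, roles, and data are hypothetical examples using an in-memory SQLite database.

      # Deny-by-default role check plus parameterized, ownership-scoped query.
      import sqlite3

      ROLE_PERMISSIONS = {                    # deny by default: anything unlisted is refused
          "clerk": {"read_own_case"},
          "admin": {"read_own_case", "assign_roles"},
      }

      def is_authorized(role: str, permission: str) -> bool:
          return permission in ROLE_PERMISSIONS.get(role, set())

      def read_own_case(conn, user: str, role: str, case_id: int):
          if not is_authorized(role, "read_own_case"):
              raise PermissionError("access denied")          # default outcome
          # Parameterized query; the WHERE clause also enforces record ownership.
          row = conn.execute(
              "SELECT case_id, summary FROM cases WHERE case_id = ? AND owner = ?",
              (case_id, user),
          ).fetchone()
          if row is None:
              raise PermissionError("access denied")          # no detail on why
          return row

      if __name__ == "__main__":
          conn = sqlite3.connect(":memory:")
          conn.execute("CREATE TABLE cases (case_id INTEGER, owner TEXT, summary TEXT)")
          conn.execute("INSERT INTO cases VALUES (1, 'alice', 'example case')")
          print(read_own_case(conn, "alice", "clerk", 1))     # permitted: own record
          for attempt in (("bob", "clerk", 1), ("alice", "auditor", 1)):
              try:
                  read_own_case(conn, *attempt)
              except PermissionError as err:
                  print(f"{attempt}: {err}")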

10.8.6.5.2  (08-31-2010)
Identification and Authentication

  1. Applications shall use Department of Treasury Public Key Infrastructure (PKI) when required in accordance with IRM 10.8.1 and IRM 10.8.52, Information Security (IT), PKI Security Policy.

  2. Applications shall not accept or utilize invalid certificates.

  3. Applications shall enforce user or client authentication.

  4. To prevent denial of service attacks, expensive (processor- or memory-intensive) operations shall not be allowed before the entity has been authenticated.

  5. Identification and authentication shall take place before access, modification or transfer of personal information.

    1. Before allowing an individual to access, modify or transfer his/her personal information, the application shall authenticate that individual based on his/her username/password or another approved credential the robustness of which equals or exceeds the robustness of username/password.

  6. A server application or provider web service must present an authentication credential to any client or requestor service that asks for that credential in order to authenticate the server application/provider before sending data or request messages to it.

  7. User Account Management General Policy

    1. Application or enterprise user IDs shall be unique.

    2. System service accounts or application accounts shall not be disabled for lack of use.

    3. User / System IDs not required shall be removed from the system.

  8. The application shall not send authentication data to the client.

    1. Sending data that is used for authentication decisions to the client side shall be limited as much as possible or not done at all.

    2. User identity shall be sufficiently verified during authentication to prevent spoofing.

  9. IP addresses shall not be used for authentication since they can be easily spoofed.

  10. Only approved session token technology shall be used.

    1. A web application that re-authenticates users based on session tokens shall accept only approved session token technology for this purpose.

    2. Session tokens shall not be stored in the URL as this is subject to replay attacks.

    3. The application's security policy shall define the authentication and access control parameters for the application. Authentication parameters include authentication credential details, number of allowed failed authentication attempts, credential expiration periods, etc. Access control parameters include the attributes (i.e., roles) required for users who will be allowed to access the application, and the ways in which each type of user (as defined by these attribute(s)) will be allowed to access the application (e.g., execute-only, modify, delete).

  11. Authentication request shall be initiated by the entity that wishes to be authenticated.

    1. For every session (or transaction) initiated by a user, an authentication chain of trust should be established and maintained that begins at the user’s client, extends to the portal, thence onward from the portal to any backend application servers, web servers, database servers, or web services with which the user is permitted to interact.

    2. A web application’s authentication service should not use basic or digest authentication. Instead, the authentication service should use form authentication, or cookie authentication with a temporary (one-time) encrypted cookie.

    3. An entity should never be authenticated based solely on its username or account name. The entity should always present an authentication credential.

  12. The application shall utilize approved strong authentication credentials.

    1. Authentication of the following types of entities shall be implemented based on an approved strong authentication credential:
      1. Remote users of private server applications/services.
      2. Users (local and remote) of private intelligence server applications/services.
      3. Users, such as administrators, who belong to privileged roles/perform privileged functions.
      4. Privileged roles (e.g., administration).

  13. A server/web service that authenticates based on a role or group credential shall perform individual authentication first.

    1. A server application/web service that authenticates an entity based on a role or group authentication credential submitted by that entity must first ensure that the entity:
      1. Actually belongs to the role/group associated with the credential.
      2. Has already been individually authenticated by an authentication service trusted by the server/web service.

    2. The application shall not authenticate entities that have anonymous account names or account names commonly associated with default accounts on COTS or Operation Support Systems (OSS).

    3. The application’s authentication service shall not be implemented by client-side code. All authentication service code should be implemented on a server.

    4. The application’s authentication service shall not be implemented in an application level or system level scripting language.

    5. The account management service of the application shall enable the administrator to designate any text string he/she chooses as the name assigned to any user, group, or role account, and shall require that the name chosen for each account be unique. If the application’s default configuration includes default account name(s), the account management service shall require the administrator to change those default account name(s). The application shall continue to operate correctly regardless of any such changes.

    6. All pages involved with authentication process shall use HTTPS.

    7. Login failure messages shall not indicate whether the username or the password was incorrect, or whether the account is locked.
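
    Note:

    The following Python sketch illustrates several of the requirements above: a salted password hash comparison, a generic login failure message that does not reveal which element failed or whether the account is locked, and a random session token intended for a secure cookie rather than a URL. The user store, single global salt, token handling, and work factor are hypothetical simplifications.

      # Authentication sketch: generic failure message and random session token.
      import hashlib
      import hmac
      import os
      import secrets

      def hash_password(password: str, salt: bytes) -> bytes:
          # The PBKDF2 work factor shown here is a hypothetical example value.
          return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

      SALT = os.urandom(16)
      USERS = {"alice": {"hash": hash_password("correct horse battery staple", SALT),
                         "locked": False}}
      SESSIONS = {}                       # token -> username; stand-in for a server-side store

      GENERIC_FAILURE = "Login failed."   # never reveals which element failed or the lock state

      def login(username: str, password: str):
          user = USERS.get(username)
          candidate = hash_password(password, SALT)      # computed even for unknown users
          if user is None or user["locked"] or not hmac.compare_digest(candidate, user["hash"]):
              return None, GENERIC_FAILURE
          token = secrets.token_urlsafe(32)              # random session token
          SESSIONS[token] = username
          # The token would travel in a Secure, HttpOnly cookie over HTTPS, never in a URL.
          return token, "OK"

      if __name__ == "__main__":
          print(login("alice", "wrong password"))        # (None, 'Login failed.')
          print(login("nobody", "anything"))             # same message: no username disclosure
          token, status = login("alice", "correct horse battery staple")
          print(status, bool(token))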

10.8.6.5.3  (08-31-2010)
Audit and Accountability

  1. For general information on Audit and Accountability, see IRM 10.8.1, Information Technology (IT) Security, Policy and Guidance and IRM 10.8.3, Audit Logging Security Standards.

10.8.6.5.3.1  (08-31-2010)
General Audit Policy

  1. The application shall warn an administrator when audit record storage is nearly full.

  2. The application shall protect audit records from unauthorized deletion, modification, or disclosure.

  3. The application’s exception handling service shall enable core dumps to be turned off when the application is not undergoing testing. The exception handling service should be configured with core dump turned off when the application is deployed operationally.

  4. Logs shall be generated and viewed using techniques that prevent log injection and keep attackers from misleading administrators or covering the traces of an attack (see the logging sketch at the end of this section).

  5. Passwords shall not be written to the audit log/records.

  6. The system shall log important security-relevant events.

    1. The application should log all security relevant events in a format that can be easily incorporated into the system audit logs. The type and amount of information to be collected by the log should be consistent with the requirements of the overall audit policy for the system. Security logs should never store authentication or authorization credentials.

    2. The system shall log all security-relevant events associated with reading, creating, overwriting, deleting, or copying/replication of objects in the schema of databases, directories, repositories, and XML documents used by the application. Auditing should be able to be configured on and off by the administrator on a per-object basis.

  7. The application audit service shall log security events to a read-only storage area and/or transmit them securely over a confidential, tamper-resistant network connection.

    1. Use of the application shall be monitored by an audit service by which all security-related application events can be either:
      1. Logged to an otherwise read-only storage location in the application host’s file system (with read-write access configurable only by the privileged security administrator role).
      2. Transmitted securely over a confidential, tamper-resistant network connection to a protected external audit collection facility.

    2. The audit service used by the application shall bind the username or process ID to the audit record of each event caused by the entity to which that username/process ID belongs.

  8. The audit service used by the application shall permit the administrator to select the events to be audited and the information to be captured about each event, with the caveat that logging of events and capturing of audit data required by policy should not be able to be turned off by the administrator.

  9. If the application’s audit service fails, the application shall notify the administrator and perform the administrator-configured recovery action.

    1. The application’s audit service shall include a tool that enables the administrator to view the application’s audit records and to run reports against them.

    2. The application’s exception handling service shall log all exceptions and failure events to an exception log. The exception handling service shall provide a tool through which the exception log is rendered human-readable by the privileged administrator role.

  10. See IRM 10.8.3, Information Technology (IT) Security, Audit Logging Security Standards, for additional auditing guidance.
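
    Note:

    The following Python sketch illustrates several of the audit requirements above: each record binds the username to the event, control characters are stripped to resist log injection, and password fields are never written to the log. The field names and record format are hypothetical examples.

      # Audit-logging sketch: sanitized, redacted records bound to a username.
      import json
      import logging
      import re

      logging.basicConfig(level=logging.INFO, format="%(asctime)s AUDIT %(message)s")
      audit_log = logging.getLogger("audit")

      CONTROL_CHARS = re.compile(r"[\x00-\x1f]")          # newlines and other control characters
      REDACTED_FIELDS = {"password", "passwd", "authorization"}

      def sanitize(value: str) -> str:
          """Remove characters an attacker could use to forge or split log lines."""
          return CONTROL_CHARS.sub(" ", value)

      def audit_event(username: str, event: str, details: dict) -> None:
          safe_details = {k: ("***" if k.lower() in REDACTED_FIELDS else sanitize(str(v)))
                          for k, v in details.items()}
          record = {"user": sanitize(username), "event": sanitize(event), "details": safe_details}
          audit_log.info(json.dumps(record))

      if __name__ == "__main__":
          audit_event("alice", "login_failure", {"reason": "bad credential", "password": "hunter2"})
          audit_event("mallory\nFORGED ENTRY", "profile_update", {"field": "address"})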

10.8.6.5.3.2  (07-31-2007)
Security Functionality Verification

  1. Automated security failure notification shall be supported.

    1. The system shall employ automated mechanisms to provide notification when failures in security controls are detected.

10.8.6.5.4  (07-31-2007)
System and Communication Protection

  1. For general information on System and Communication Protection, see IRM 10.8.1.

10.8.6.6  (08-31-2010)
Risk Based Decision

  1. Requests to deviate from this policy shall be submitted in accordance with policy for Risk-Based Decisions (RBD) as defined in IRM 10.8.1.

  2. Use Form 14201, as described in the Risk Acceptance Request Standard Operating Procedure (SOP), available on the Enterprise FISMA Compliance SharePoint site via the Risk Acceptance Requests link at: http://mits.web.irs.gov/Cybersecurity/Divisions/SRM/Policy_Guidance/risk_acceptance.htm.

Exhibit 10.8.6-1 
Application Vulnerabilities

  1. Refer to IRM 2.5.3, Systems Development, Programming and Source Code Standards and the Application Development PMO Security's Final Secure Coding Guidelines document for common application vulnerabilities.

Exhibit 10.8.6-2 
Glossary

ACL - In computer security, an Access Control List (ACL) is a list of permissions attached to an object. The list specifies who or what is allowed to access the object and what operations are allowed to be performed on the object. In a typical ACL, each entry in the list specifies a subject and an operation: for example, the entry (Alice, delete) on the ACL for file XYZ gives Alice permission to delete file XYZ.

API - An application programming interface (API) is a source code interface that a computer system or program library provides to support requests for services to be made of it by a computer program. An API differs from an application binary interface in that it is specified in terms of a programming language that can be compiled when an application is built, rather than an explicit low level description of how data is laid out in memory.

Application - The term application is a shorter form of application program. An application program is a program designed to perform a specific function directly for the user or, in some cases, for another application program. Examples of applications include word processors, database programs, Web browsers, development tools, drawing, paint, image editing programs, and communication programs. Applications use the services of the computer's operating system and other supporting applications. The formal requests and means of communicating with other programs that an application program uses is called the application program interface (API).

Automated vulnerability scanning - A vulnerability scanner is a computer program designed to search systems and map them for weaknesses in an application, computer, or network. Typically (Step 1), the scanner will first look for active IP addresses, open ports, operating systems (OS), and any applications running. Next (Step 2), it may create a report or move to the next step. Next (Step 3), it will try to determine the patch level of the OS or applications; in this process the scanner can trigger an exploit of the vulnerability, such as crashing the OS or application. Finally (Step 4), the scanner may attempt to exploit the vulnerability. Scanners may be either malicious or friendly. Friendly scanners usually stop at Step 2 and occasionally Step 3, but never go to Step 4. Automated vulnerability scanning is a black box testing method.

Black box - In software development, a black box is a testing method in which the tester has no knowledge of the inner workings of the program being tested. The tester might know what is input and what the expected outcome is, but not how the results are achieved. A black box component is a compiled program that is protected from alteration by ensuring that a programmer can only access it through an exposed interface.

Blackhole list - A blackhole list, sometimes simply referred to as a blacklist, is the publication of a group of ISP addresses known to be sources of spam, a type of email more formally known as unsolicited commercial email (UCE). The goal of a blackhole list is to provide a list of IP addresses that a network can use to filter out undesirable traffic. After filtering, traffic coming or going to an IP address on the list simply disappears, as if it were swallowed by an astronomical black hole.

Blacklist - A list of characters or character representations (see Canonicalization) that are specifically forbidden as a part of data entry into an application’s data items or entry fields. Used in conjunction with a Whitelist of acceptable characters for a data entry field or item in order to prevent attacks that are triggered or carried out via data entry, particularly where data items are moved from the data plane to the control plane as part of a dynamic command construction operation.

Canonicalization - A process for converting data that has more than one possible representation into a standard, normal, or canonical form. In security usage, this is done to compare different representations for equivalence to whitelist and blacklist members, thus permitting or denying the use of the represented character in an entry field, data item, or by an application.
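
For illustration, the following Python sketch shows the canonicalization-before-comparison idea in this definition: input is normalized to a single canonical form before it is compared against a whitelist, so alternate encodings of the same characters cannot slip past the check. The whitelist pattern is a hypothetical example.

    # Canonicalize (Unicode NFKC), then compare against a whitelist.
    import re
    import unicodedata

    WHITELIST = re.compile(r"[A-Za-z0-9 .,'-]{1,64}")      # characters allowed in a name field

    def accept(value: str) -> bool:
        canonical = unicodedata.normalize("NFKC", value)   # convert to the canonical form first
        return bool(WHITELIST.fullmatch(canonical))

    if __name__ == "__main__":
        print(accept("Jane Smith"))          # True: already canonical and whitelisted
        print(accept("\uFF2A\uFF41ne"))      # True: fullwidth 'Ja' normalizes to 'Ja'
        print(accept("<script>"))            # False: '<' and '>' are not whitelisted
        print(accept("\uFF1Cscript\uFF1E"))  # False: fullwidth brackets normalize to '<' and '>'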

Code - In programming, code (noun) is a term used for both the statements written in a particular programming language - the source code, and a term for the source code after it has been processed by a compiler and made ready to run in the computer - the object code.

Configuration control - Process for controlling modifications to hardware, firmware, software, and documentation to ensure the information system is protected against improper modifications prior to, during, and after system implementation.

Configuration management - Process for controlling modifications to hardware, firmware, software, and documentation to ensure the information system is protected against improper modifications prior to, during, and after system implementation.

Cookie - A cookie is information that a Web site puts on your hard disk so that it can remember something about you at a later time. (More technically, it is information for future use that is stored by the server on the client side of a client/server communication.) Typically, a cookie records your preferences when using a particular site. Using the Web's Hypertext Transfer Protocol (HTTP), each request for a Web page is independent of all other requests. For this reason, the Web page server has no memory of what pages it has sent to a user previously or anything about your previous visits. A cookie is a mechanism that allows the server to store its own information about a user on the user's own computer. You can view the cookies that have been stored on your hard disk (although the content stored in each cookie may not make much sense to you). The location of the cookies depends on the browser. Internet Explorer stores each cookie as a separate file under a Windows subdirectory. Netscape stores all cookies in a single cookies.txt file. Opera stores them in a single cookies.dat file.

COTS - A COTS (commercial off-the-shelf) product is one that is used as-is. COTS products are designed to be easily installed and to inter-operate with existing system components. Almost all software bought by the average computer user fits into the COTS category: operating systems, office product suites, word processing, and email programs are among the myriad examples.

Database (DB) - A database is a collection of information that is organized so that it can easily be accessed, managed, and updated. In one view, databases can be classified according to types of content: bibliographic, full-text, numeric, and images.

DTD - A Document Type Definition (DTD) is a set of definitions and constraints, following the rules of the Standard Generalized Markup Language (SGML) or of the Extensible Markup Language (XML), a subset of SGML. A DTD is a specification that accompanies a document and identifies what the markup codes are that, in the case of a text document, separate paragraphs, identify topic headings, and so forth, and how each is to be processed. By including a DTD with a document, any location that has a DTD reader (or SGML compiler) will be able to process the document and display or print it as intended. A single standard SGML compiler can then serve many different kinds of documents that use a range of different markup codes and related meanings. The compiler looks at the DTD and then prints or displays the document accordingly.

Encryption testing - Testing digital signatures and encryption to ensure that they are functioning properly. Encryption testing is a form of black box testing.

Fuzz testing - Fuzz testing or fuzzing is a software testing technique (black box) that provides random data (fuzz) to the inputs of a program. If the program fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted.

GET - An HTTP method that requests a representation of the specified resource. GET is by far the most common method used on the Web today. It should not be used for operations that cause side-effects (using it for actions in web applications is a common misuse).

GOTS - A GOTS (government off-the-shelf) product is typically developed by the technical staff of the government agency for which it is created. It is sometimes developed by an external entity, but with funding and specification from the agency. Because agencies can directly control all aspects of GOTS products, these are generally preferred for government purposes.

HTML - HTML (Hypertext Markup Language) is the set of markup symbols or codes inserted in a file intended for display on a World Wide Web browser page. The markup tells the Web browser how to display a Web page's words and images for the user. Each individual markup code is referred to as an element (but many people also refer to it as a tag). Some elements come in pairs that indicate when some display effect is to begin and when it is to end.

HTTPS - HTTPS (Hypertext Transfer Protocol over Secure Socket Layer, or HTTP over SSL) is a Web protocol developed by Netscape and built into its browser that encrypts and decrypts user page requests as well as the pages that are returned by the Web server. HTTPS is really just the use of Netscape's Secure Socket Layer (SSL) as a sublayer under its regular HTTP application layering. (HTTPS uses port 443 instead of HTTP port 80 in its interactions with the lower layer, TCP/IP.) Early SSL implementations used a 40-bit key size for the RC4 stream encryption algorithm; current implementations support longer keys and stronger algorithms.

Metacode - Metacode is a language that describes text and graphics.

Misuse (abuse) case - Misuse cases explore possible threats and bind them to specific functionalities of the application. Misuse cases target individual functionalities and detail the individual threats to that functionality. A misuse case is a use case from the point of view of an actor hostile to the system. Just like use cases, which concentrate on what the system should do, misuse cases concentrate on what the system should not do.

Mobile code - Software programs or parts of programs obtained from remote information systems, transmitted across a network, and executed on a local information system without explicit installation or execution by the recipient.

Penetration testing - A method of evaluating the security of a computer system or network by simulating an attack by a malicious user.

POST - An HTTP method that submits data to be processed (e.g., from an HTML form) to the identified resource. The data is included in the body of the request. This may result in the creation of a new resource, the update of existing resources, or both.

Program - In computing, a program is a specific set of ordered operations for a computer to perform. In the modern computer that John von Neumann outlined in 1945, the program contains a one-at-a-time sequence of instructions that the computer follows. Typically, the program is put into a storage area accessible to the computer. The computer gets one instruction and performs it and then gets the next instruction. The storage area or memory can also contain the data that the instruction operates on. (Note that a program is also a special kind of data that tells how to operate on application or user data.)

Protocol - In information technology, a protocol is the special set of rules that end points in a telecommunication connection use when they communicate. Protocols exist at several levels in a telecommunication connection. For example, there are protocols for the data interchange at the hardware device level and protocols for data interchange at the application program level. In the standard model known as Open Systems Interconnection (OSI), there are one or more protocols at each layer in the telecommunication exchange that both ends of the exchange must recognize and observe. Protocols are often described in an industry or international standard.

SAML- Security Assertion Markup Language (SAML) is an Extensible Markup Language (XML) standard that allows a user to log on once for affiliated but separate Web sites. SAML is designed for business-to-business (B2B) and business-to-consumer (B2C) transactions.

Security attack testing - Attack patterns are a group of rigorous methods for finding bugs or errors in code related to computer security. Attack patterns are often used for testing purposes and are very important for ensuring that potential vulnerabilities are prevented. The attack patterns themselves can be used to highlight areas which need to be considered for security hardening in a software application. They also provide, either physically or in reference, the common solution pattern for preventing the attack. Such a practice can be termed defensive coding patterns. Attack patterns define a series of repeatable steps that can be applied to simulate an attack against the security of a system. Security Attack Testing is a form of black box testing.

Security controls - The management, operational, and technical controls (i.e., safeguards or countermeasures) prescribed for an information system to protect the confidentiality, integrity, and availability (CIA) of the system and its information.

Security testing tools - Source code analysis tools are automated software vulnerability detection tools that attempt to identify potential flaws. They range from simple, noisy grep-like tools that look for potential vulnerabilities to full-fledged static-analysis tools that perform data flow analysis on the code under inspection.

Significant change - NIST SP 800-37 defines a significant change as follows:

A significant change is defined as a change that is likely to affect the security state of an information system. Significant changes to an information system may include for example: (i) installation of a new or upgraded operating system, middleware component, or application; (ii) modifications to system ports, protocols, or services; (iii) installation of a new or upgraded hardware platform; (iv) modifications to cryptographic modules or services; or (v) modifications to security controls. Examples of significant changes to the environment of operation may include for example: (i) moving to a new facility; (ii) adding new core missions or business functions; (iii) acquiring specific and credible threat information that the organization is being targeted by a threat source; or (iv) establishing new/modified laws, directives, policies, or regulations.

The footnote to this passage states:

The examples of changes listed above are only significant when they meet the threshold established in the definition of significant change (i.e., a change that is likely to affect the security state of the information system).

SOAP - SOAP (Simple Object Access Protocol) is a way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of an operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange. Since Web protocols are installed and available for use by all major operating system platforms, HTTP and XML provide an already at-hand solution to the problem of how programs running under different operating systems in a network can communicate with each other. SOAP specifies exactly how to encode an HTTP header and an XML file so that a program in one computer can call a program in another computer and pass it information. It also specifies how the called program can return a response.

SQL - SQL (Structured Query Language) is a standard interactive and programming language for getting information from and updating a database. Although SQL is both an ANSI and an ISO standard, many database products support SQL with proprietary extensions to the standard language. Queries take the form of a command language that lets you select, insert, update, find out the location of data, and so forth. There is also a programming interface.

TLS - Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols which provide secure communications on the Internet for such things as web browsing, email, Internet faxing, instant messaging and other data transfers. There are slight differences between SSL 3.0 and TLS 1.0, but the protocol remains substantially the same. The term TLS as used here applies to both protocols unless clarified by context.

Threat modeling - Threat modeling is a procedure for optimizing network security by identifying objectives and vulnerabilities, and then defining countermeasures to prevent, or mitigate the effects of, threats to the system. Threat modeling is an iterative process that consists of defining enterprise assets, identifying what each application does with respect to these assets, creating a security profile for each application, identifying potential threats, prioritizing potential threats, and documenting adverse events and the actions taken in each case.

Type declaration - A type declaration statement specifies the type, length, and attributes of objects and functions. Initial values can be assigned to objects.

Type checking - Type checking is the process of identifying errors in a program based on explicitly or implicitly stated type information.

URI - To paraphrase the World Wide Web Consortium, Internet space is inhabited by many points of content. A URI (Uniform Resource Identifier) is the way you identify any of those points of content, whether it be a page of text, a video or sound clip, a still or animated image, or a program. The most common form of URI is the Web page address, which is a particular form or subset of URI called a Uniform Resource Locator (URL). A URI typically describes the mechanism used to access the resource, the computer on which the resource is housed, and the name of the resource on that computer.

URL - A URL (Uniform Resource Locator, previously Universal Resource Locator) - usually pronounced by sounding out each letter, but sometimes pronounced "earl" , is the unique address for a file that is accessible on the Internet. A common way to get to a Web site is to enter the URL of its home page file in your Web browser's address line. However, any file within that Web site can also be specified with a URL. Such a file might be any Web (HTML) page other than the home page, an image file, or a program such as a common gateway interface application or Java applet. The URL contains the name of the protocol to be used to access the file resource, a domain name that identifies a specific computer on the Internet, and a pathname, a hierarchical description that specifies the location of a file in that computer.

White box - A white box or clear box is a device, program or system whose internal workings are well understood. White box testing, also called white box analysis, clear box testing or clear box analysis, is a strategy for software debugging in which the tester has excellent knowledge of how the program components interact and also is familiar with the details of its internal operation.

Whitelist - A whitelist is a list of email addresses or domain names from which an email blocking program will allow messages to be received. Email blocking programs, also called spam filters, are intended to prevent most unsolicited email messages (spam) from appearing in subscriber inboxes. In secure coding, a whitelist is also a list of characters or values that are explicitly permitted as input (see Blacklist and Canonicalization).

XHTML - XHTML (Extensible Hypertext Markup Language) is "a reformulation of HTML 4.0 as an application of the Extensible Markup Language (XML)." For readers unacquainted with either term, HTML is the set of codes (or markup language) that a writer puts into a document to make it displayable on the World Wide Web. HTML 4 is the current version of it. XML is a structured set of rules for how to define any kind of data to be shared on the Web. It's called an extensible markup language because anyone can invent a particular set of markup for a particular purpose and as long as everyone uses it (the writer and an application program at the receiver's end), it can be adapted and used for many purposes - including, as it happens, describing the appearance of a Web page. That being the case, it seemed desirable to reframe HTML in terms of XML. The result is XHTML, a particular application of XML for expressing Web pages.

XML - XML (Extensible Markup Language) is a flexible way to create common information formats and share both the format and the data on the World Wide Web, Intranets, and elsewhere. For example, computer makers might agree on a standard or common way to describe the information about a computer product (processor speed, memory size, and so forth) and then describe the product information format with XML. Such a standard way of describing data would enable a user to send an intelligent agent (a program) to each computer maker's Web site, gather data, and then make a valid comparison. XML can be used by any individual or group of individuals or companies that wants to share information in a consistent way.

XSS - Cross-site scripting (XSS) is a type of computer security vulnerability, typically found in web applications, which allows code injection by malicious web users into the web pages viewed by other users. Examples of such code include HTML code and client-side scripts. An exploited cross-site scripting vulnerability can be used by attackers to bypass access controls such as the same origin policy.

Exhibit 10.8.6-3 
References

IRM
IRM 2.5.3, Systems Development, Programming and Source Code Standards.

IRM 10.8.1, Information Technology (IT), Security Policy and Guidance.

IRM 10.8.2, Information Technology Security Roles and Responsibilities.

IRM 10.8.3, Audit Logging Security Standards.

IRM 10.8.50, Service-wide Patch Management.

NIST
FIPS PUB 140-2, Security Requirements for Cryptographic Modules -- May, 2001.
NIST Special Publication 500-224, Stable Implementation Agreements for Open System Environment, Version 8, Edition 1-- December, 1994.
NIST Special Publication 800-9, Good Security Practices for Electronic Commerce, Including Electronic Data Interchange-- December, 1993.
FIPS PUB 146-2, Profiles for Open Systems Inter-networking Technologies, May 1995.
FIPS PUB 183, Integration Definition for Function Modeling (IDEF0) -- December 1993.

OMB
Office of Management and Budget (OMB) Circular A-119 (revised), Federal Participation in the Development and Use of Voluntary Standards -- October 1993.

United Nations
UN/ECE/WP.4 - Recommendation No. 25 on the Use of the UN/EDIFACT Standard -- September 1995.

Other Publications

Application Development Program Management Office (PMO) Security's Final Secure Coding Guidelines

Department of Homeland Security (DHS), Security in the Software Lifecycle - Making Software Development Processes and Software Produced by Them More Secure - August 2006.

IBM, Application Security Strategies Article II. Secure SDLC: Integrating security into your software development life cycle, March 23, 2006.

IEEE Security & Privacy, "A Process for Performing Security Code Reviews," vol. 4, no. 4, July/August 2006, pp. 74-79.

Information Assurance Technology Analysis Center (IATAC) Software Security Assurance State of the Art Report (SOAR), July 31, 2007.

Ounce Labs, The Path to a Secure Application: A Source Code Security Review Check List, Ryan Berg, 2007.

Security in the Software Life Cycle, Joe Jarzombek , Department of Homeland Security, Karen Mercedes Goertzel, Booz Allen Hamilton, September 2006.

Software Assurance Forum for Excellence in Code (SafeCode), Software Assurance, An overview of Current Industry Best Practices, February 2008.

Exhibit 10.8.6-4 
Recommended Best Practices

The previous version of this IRM (July 31, 2007) had identified the recommendations listed in Exhibit 10.8.6-4, Additional Systems Development, Programming and Source Code Standards, for relocation to IRM 2.5.3, Systems Development, Programming and Source Code Standards. These recommendations might be incorporated into a future revision of IRM 2.5.3. Until that time, they can be found in the exhibit table and should be implemented where applicable.

 
Recommendations Applicable Area
Do not use unsafe application constructs. Developers shall not use C language constructs when there are C++ equivalents or replacements. For example, do not use the C functions malloc and free. Use new and delete instead. See IRS C++ Programming Standards, Document Number 12384. C Programming
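The following minimal C++ sketch (illustrative only, not taken from the IRS C++ Programming Standards) contrasts the discouraged C-style allocation with its C++ equivalents:

    #include <cstdlib>
    #include <vector>

    void discouraged() {
        // C construct: no type safety beyond the cast, easy to mismatch sizes.
        int* values = static_cast<int*>(std::malloc(100 * sizeof(int)));
        // ... use values ...
        std::free(values);
    }

    void preferred() {
        // C++ equivalents of malloc and free.
        int* values = new int[100];
        // ... use values ...
        delete[] values;

        // Better still, let a standard container manage the memory automatically.
        std::vector<int> safer(100);
    }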
All non-standard, erroneous (i.e., syntax errors), and unnecessary tags shall be removed from Hypertext Markup Language (HTML) and Extensible Hypertext Markup Language (XHTML) code that has been automatically generated by a web authoring tool.
a. Tags that are browser-dependent (e.g., tags that are intended to optimize the code for a particular browser) shall also be removed.
HTML/XML/Web Based Programming
Type declaration and type checking – Types shall be declared strictly and explicitly, including whether they are signed or unsigned. Defines and TypeDefs
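As a brief, hedged illustration of strict and explicit type declaration, the sketch below uses fixed-width types from <cstdint> so that size and signedness are stated rather than left to the platform (the type names are illustrative assumptions):

    #include <cstdint>

    using RecordCount = std::uint32_t;   // unsigned: a count is never negative
    using Adjustment  = std::int64_t;    // signed: adjustments may be negative

    RecordCount recordsProcessed = 0;
    Adjustment  balanceDelta     = -125;
    // Discouraged: plain int or long, whose width varies by platform and whose
    // signedness is easy to overlook.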
Input Validation for Bounds Checking – C and C++ programmers shall write their programs to explicitly perform bounds checking, and must avoid using unsafe C/C++ calls and functions. Buffer Overflows
Safe Alternatives to Dangerous Functions and Calls – Program Developer/Programmers shall specify realistic buffer sizes and implement the necessary input validations to ensure that input does not exceed those buffer sizes. Buffer Overflows
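A minimal sketch of this recommendation in C++: the destination buffer size is passed explicitly, the input length is checked against it, and a length-limited copy is used instead of an unchecked call such as strcpy (the function and parameter names are illustrative):

    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    // Copies untrusted input into dest only if it fits, including the terminating NUL.
    bool copyInput(const char* input, char* dest, std::size_t destSize) {
        if (input == nullptr || dest == nullptr || destSize == 0) {
            return false;
        }
        if (std::strlen(input) >= destSize) {
            return false;                           // reject (or truncate) oversized input
        }
        std::snprintf(dest, destSize, "%s", input); // length-limited copy
        return true;
    }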
Program Developer/Programmers shall ensure programs are free of overflow vulnerabilities. Buffer Overflows
Disabling Stack Execution – Program Developer/Programmers shall avoid stack smashing vulnerabilities, by implementing non-executable stacks, which will prevent an attacker from being able to write and execute malicious code on the program stack. Buffer Overflows
Only files and folders required for systems functionality shall be copied to the production folder(s). General Programming
File upload(s) shall be restricted, to ensure that files can only be uploaded into intended locations and to ensure the file size and type are appropriate considering the available resources. Basic Principles of Data Controls
Debug binaries shall not be used in production environments. General Programming
The application shall not be designed to require the use of default configurations or otherwise insecure configurations of any developmental or non-developmental components, in the application itself or in its environment. General Programming
The application shall allow default account names, passwords, etc., used in the application or its environment to be disabled, removed, or changed. General Programming
Every application process shall have a developer-defined time-out threshold. The threshold shall indicate the amount of real time during which that process can execute. If this threshold is reached by the executing process, the process shall stop running, clean up all resources (e.g., memory, cache) allocated to it by the host, and gracefully terminate. General Programming
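One possible way to satisfy this time-out requirement is sketched below; the two-second threshold and the helper functions are illustrative assumptions, not IRS-defined values:

    #include <chrono>

    static int unitsRemaining = 1000000;
    bool doUnitOfWork()     { return --unitsRemaining > 0; }   // placeholder work step
    void releaseResources() { /* free memory, cache, handles, etc. */ }

    void runWithTimeout() {
        using Clock = std::chrono::steady_clock;
        const auto threshold = std::chrono::seconds(2);         // developer-defined limit
        const auto start = Clock::now();

        while (doUnitOfWork()) {
            if (Clock::now() - start >= threshold) {
                break;                     // threshold reached: stop running
            }
        }
        releaseResources();                // clean up everything allocated to the process
        // ... then terminate gracefully ...
    }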
Network protocols and communication services used by the application that default to unlimited connections shall be re-configured to allow only a finite number of simultaneous connections. General Programming
The application’s host operating system shall be configured to limit the processing resources (memory and processor time) allotted to the application’s processes. General Programming
The application’s exception handling service shall be configured to recognize all operating system imposed resource limitations. General Programming
The application host’s file system shall be configured to isolate, through partitioning into separate execution domains, all security processes within the application itself and all security processes invoked by the application in other execution environment components on the same physical host.
a. This partitioning shall be configured to ensure that the application’s security processes are kept strictly separate from its non-security processes and the application’s non-security processes are kept strictly separate from security processes in its execution environment.
b. The application shall not be designed in a way that requires it to run both security and non-security processes in the same execution domain.
General Programming
All graphic editors, office productivity software (e.g., word processors, spreadsheet programs), etc., shall be removed from the application’s code base and execution environment before operational deployment. General Programming
Access controls on all directories in which system libraries, privileged programs, configuration files, and security data owned or used by the application shall be configured to prevent read, write, or delete access by unauthorized entities or processes. General Programming
Access to sensitive directories should be configured as follows:
a. Executable(s) on server:
1. Execute, write, delete access by administrator role.
2. Execute-only access by roles assigned to authorized server application components.
3. No access by user role (user access must be gained through client or proxy agent).
b. Executable(s) on the client:
1. Execute, write, delete access by user and administrator roles (to enable patching/updating).
2. Execute-only access by roles assigned to authorized client application components.
c. Configuration and security files on server:
1. Read, write, delete access by administrator role.
2. Read-only access by roles assigned to authorized server application components.
3. No access by user role.
d. Configuration and security files on client:
1. Read, write, delete access by user and administrator roles (to enable updates)
2. Read-only access by role assigned to authorized client application components.
General Programming
All web server and application server default directories and files shall be renamed, unless the application absolutely cannot run without those specific directory/file names. Any directories or files that retain their default names shall have their access controls configured as restrictively as possible. General Programming
Web servers and application servers that enable automatic directory indexing shall have that feature turned off before operational deployment. General Programming
Email service shall be disabled. General Programming
Backup strategies shall be captured in the design phase to ensure confidentiality, integrity, and availability of data. For example, capture the frequency of data backups, the access privileges to the backup media, and whether the backup will be encrypted. General Programming
Security Code Reviews shall be performed on all code early in the implementation phase. Code Review
Code shall be prioritized based on the following:
a. Older Code.
b. Code that runs by default.
c. Code that runs in elevated context.
d. Anonymously accessible code.
e. Code listening on a globally accessible network interface.
f. Code written in C/C++/assembly language.
g. Code with a history of vulnerabilities.
h. Code that handles sensitive data.
i. Complex code.
j. Code that changes frequently.
k. Code with a high bug density.
l. Code that has high risk associated with it.
m. Data entry points.
Code Review
Code analysis tools shall be run before a code review. For more information on code analysis tools, refer to the System Information and Integrity section of this IRM. Code Review
Threat modeling shall be used to determine high risk code. Code Review
Higher priority code (based on the prioritization criteria) shall be reviewed first by a team of no more than four people consisting of the code author, a subject-matter expert (cannot be the author), and a note taker. A code review schedule shall be made with no more than two hours of review at a time. Code Review
Iterative assessments and re-assessments shall be performed throughout the software’s lifetime. Code Review
HTML encoding shall be used when displaying text data from external sources in a browser, to prevent cross-site scripting (XSS). HTML/XML/Web Based Programming
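A minimal C++ sketch of HTML encoding for externally supplied text; production code would normally use a vetted encoding routine from an approved library rather than a hand-rolled helper:

    #include <string>

    // Encode the characters that are significant to HTML so external data is
    // rendered as text rather than interpreted as markup or script.
    std::string htmlEncode(const std::string& input) {
        std::string out;
        out.reserve(input.size());
        for (char c : input) {
            switch (c) {
                case '&':  out += "&amp;";  break;
                case '<':  out += "&lt;";   break;
                case '>':  out += "&gt;";   break;
                case '"':  out += "&quot;"; break;
                case '\'': out += "&#x27;"; break;
                default:   out += c;        break;
            }
        }
        return out;
    }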
Data from external sources shall be properly validated/manipulated before being embedded in Structured Query Language (SQL) statements in order to prevent SQL injection. SQL Statements
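As a hedged illustration of this recommendation, the sketch below uses parameterized queries with the SQLite C API (chosen only for illustration; the table and column names are hypothetical). The external value is bound as data, so it cannot change the intent of the SQL statement:

    #include <sqlite3.h>
    #include <string>

    // Returns true if a row exists for the supplied, untrusted taxpayer ID.
    bool taxpayerExists(sqlite3* db, const std::string& taxpayerId) {
        const char* sql = "SELECT 1 FROM taxpayer WHERE taxpayer_id = ?;";
        sqlite3_stmt* stmt = nullptr;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) {
            return false;
        }
        // The value is bound to the placeholder rather than concatenated into the SQL.
        sqlite3_bind_text(stmt, 1, taxpayerId.c_str(), -1, SQLITE_TRANSIENT);
        const bool found = (sqlite3_step(stmt) == SQLITE_ROW);
        sqlite3_finalize(stmt);
        return found;
    }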
Ensure that large numeric input does not cause unexpected results.
a. Numeric input shall be properly validated to ensure that large numbers will not be used in unexpected ways. For example, 5-digit numbers can be much higher than 99999 when entered with an exponent (e.g., 99e99).
Input Controls
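A short sketch of validating numeric input against an expected range; the 0 to 99999 range is an illustrative assumption. Because the input is fully parsed before the range check, exponent forms such as 99e99 are rejected:

    #include <cerrno>
    #include <cstdlib>
    #include <string>

    bool validateAmount(const std::string& input, long& value) {
        errno = 0;
        char* end = nullptr;
        const double parsed = std::strtod(input.c_str(), &end);
        if (errno != 0 || end == input.c_str() || *end != '\0') {
            return false;                  // not a clean numeric value
        }
        if (parsed < 0.0 || parsed > 99999.0) {
            return false;                  // rejects 99e99 and other out-of-range values
        }
        value = static_cast<long>(parsed);
        return true;
    }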
Extensible Markup Language (XML) shall be validated against a Document Type Definition (DTD) or XML schema.
a. When parsing XML, the document shall be validated against a DTD or XML schema to prevent attackers from supplying malicious input.
b. Ensure that dots, slashes, and other special characters used in user-entered paths do not cause unexpected results.
c. Ensure that server-side code does not rely on the client to perform input validation. The server application shall always validate any input it receives, regardless of whether that input was previously validated by a client.
d. Implement and enforce size restrictions on XML message(s); this is necessary to prevent memory and CPU exhaustion of the underlying system.
e. If a process in a medium or high robustness application cannot validate the input it has received from an external entity, that process shall:
1. Reject the input.
2. Return an error message to the entity that sent the input, reporting that the current transaction/session is being terminated due to an input error.
3. Terminate the transaction/session.
f. Any application function that accesses a memory buffer or array (e.g., to write data to that memory location), shall first check the size of the buffer or the bounds of the array. If the function intends to write data to the buffer/array, and the size of the data exceeds the size of the buffer/array, the function should either reject (not write) the data or truncate the data to the size of the buffer/array before writing it.
g. The application shall reject any input containing HTML, XHTML, or XML received from an untrusted source (user or other entity).
h. An application process shall never accept, use, act upon, or copy to a database any input (data, parameter, or argument) it receives from an external entity or library without first validating the correctness of the properties of the received input. The process should suspend processing of any new input received during the same session, until it completes validation of the input it has already received.
HTML/XML/Web Based Programming
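A hedged sketch, assuming the libxml2 library is available, of validating an XML message against an XML Schema and enforcing a size restriction before parsing (the 1 MB limit and file names are illustrative assumptions):

    #include <libxml/parser.h>
    #include <libxml/xmlschemas.h>
    #include <cstddef>
    #include <string>

    // Returns true only if the message is within the size limit and schema-valid.
    bool validateXmlMessage(const std::string& message, const char* schemaPath) {
        const std::size_t kMaxBytes = 1024 * 1024;          // illustrative size limit
        if (message.size() > kMaxBytes) {
            return false;
        }
        // Parse with network access disabled and without entity substitution.
        xmlDocPtr doc = xmlReadMemory(message.data(), static_cast<int>(message.size()),
                                      "message.xml", nullptr, XML_PARSE_NONET);
        if (doc == nullptr) {
            return false;
        }
        bool valid = false;
        xmlSchemaParserCtxtPtr pctxt = xmlSchemaNewParserCtxt(schemaPath);
        xmlSchemaPtr schema = (pctxt != nullptr) ? xmlSchemaParse(pctxt) : nullptr;
        if (schema != nullptr) {
            xmlSchemaValidCtxtPtr vctxt = xmlSchemaNewValidCtxt(schema);
            if (vctxt != nullptr) {
                valid = (xmlSchemaValidateDoc(vctxt, doc) == 0);   // 0 means valid
                xmlSchemaFreeValidCtxt(vctxt);
            }
            xmlSchemaFree(schema);
        }
        if (pctxt != nullptr) {
            xmlSchemaFreeParserCtxt(pctxt);
        }
        xmlFreeDoc(doc);
        return valid;
    }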
Input properties to be validated:
a. The following properties of input received by an application process shall be verified by that process before that process accepts/uses/acts upon the input:
1. Input formatted as expected.
2. Input contains only correct syntax.
3. Character strings of values supplied for variables are valid.
4. Size (in bytes or characters) of input falls within the expected bounds.
5. Numeric input falls within an acceptable range of values.
6. Input contains no numeric values that could cause a routine or calculation in the application to divide any number by zero.
7. Input contains no parameters whose source cannot be validated as matching the identity of the input source as expressed in the source’s session token.
8. Input cannot induce a buffer overflow.
9. Input contains no HTML, XHTML, or XML expressions.
10. Input contains no special characters, metacode, or metacharacters that have not been encoded (if encoding is permitted).
11. Input contains no direct queries or command strings in Structured Query Language (SQL) or any other procedural language.
12. Input contains no truncated pathname references.
13. Input contains no scripting language expressions (e.g., Perl, JavaScript).
14. Input contains no other unexpected or executable content or invalid values.
b) The application shall inform the user of the expected, acceptable characteristics of all data strings to be input by the user, e.g., via an HTML or XHTML form.
c) A server application should never rely on the client to perform input validation. The server application should always validate any input it receives, regardless of whether that input was previously validated by the client.
d) The application shall not act upon instructions or inputs stored in files that originate from untrusted sources without first validating the instructions/input. The application should trust only files that:
1. Are controlled solely by trusted users.
2. Cannot be written to/modified by untrusted users.
3. Do not reside in directories that can be write-accessed by untrusted users.
e) Input validation of data that contains active content, such as mobile code, shall not cause that active content to execute.
f) The application shall validate all directory pathnames, Uniform Resource Locators (URL)s, and Uniform Resource Identifiers (URI)s entered by users in order to flag:
1. Unrecognized or unsafe extensions (e.g., .exe).
2. Untrusted domains or directories which the application does not allow clients to link to or download from (e.g., Internet domain names ending in .ru, .com). The application’s pathname validation function should be configured judiciously, balancing the desire to prevent downloads of potentially malicious files into the application environment against the users’ need to access certain types of files, including those with filename extensions such as .doc, .xls, .ppt, .rtf, .mpp, .vsd, .ps, .exe, and other extensions associated with files created by applications for which browser plugins are not readily available. The portal/server application should not allow any client to link to or download files from a directory or network location whose pathname contains a dubious domain name, root directory name, or filename extension. The server application should be configured to block dubious pathnames typed in by users or selected via saved bookmarks in order to prevent clients from linking to those untrusted sites and downloading potentially malicious files from them.
g) The application shall not make use of HTTP (insecure) requests, except when necessary and no other options are available. A risk assessment must address this issue.
h) The application shall not use hidden fields for sensitive or critical data.
i) The application shall validate the URL in any redirects accepted by the application.
j) The application shall canonicalize or normalize all user input (including all parts of the HTTPS Request - headers, query string, cookies, form fields, and hidden fields) prior to applying validation.
Input Controls
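As a hedged illustration of whitelist validation applied after canonicalization, the sketch below accepts a field only if every character is on an explicit whitelist; the permitted character set and length limit are assumptions for illustration, and encoding normalization (e.g., of %2e sequences) is assumed to happen before this check:

    #include <algorithm>
    #include <cctype>
    #include <cstddef>
    #include <string>

    // Whitelist: alphanumeric characters, space, hyphen, and period only.
    bool isWhitelisted(const std::string& canonicalInput) {
        return std::all_of(canonicalInput.begin(), canonicalInput.end(),
                           [](unsigned char c) {
                               return std::isalnum(c) || c == ' ' || c == '-' || c == '.';
                           });
    }

    // A field is accepted only after canonicalization and a successful whitelist check.
    bool validateField(const std::string& canonicalInput, std::size_t maxLength) {
        return canonicalInput.size() <= maxLength && isWhitelisted(canonicalInput);
    }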
File upload(s) shall be restricted.
a. The following file upload mechanisms shall be enforced:
1. Maximum file size.
2. Restrict files to specific types.
3. Do not allow the user to specify the target location.
4. Check that the file name is well formed.
5. Virus check uploaded file.
6. Ensure that the target location for uploaded files is outside the web/app server path.
b. The application shall not execute user input.
c. The application shall not allow user data to modify the intent of the database query.
d. A server application's or web service's input validation processes and exception handling service should be able to:
1. Detect, recognize, and resist flood attacks.
2. Identify the attack source.
3. Ensure that the application gracefully terminates all processing of input received from that source.
Basic Principles of Data Controls
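A partial sketch (assuming C++17 std::filesystem) of the upload checks listed above; the size limit, permitted extensions, upload directory, and virus-scan hook are all illustrative assumptions:

    #include <cstdint>
    #include <filesystem>
    #include <string>

    // Placeholder for integration with a real malware scanner.
    bool virusScanPassed(const std::filesystem::path&) { return true; }

    bool acceptUpload(const std::string& clientFileName, std::uint64_t sizeBytes,
                      const std::filesystem::path& uploadRoot) {
        const std::uint64_t kMaxBytes = 5 * 1024 * 1024;    // maximum file size
        if (sizeBytes == 0 || sizeBytes > kMaxBytes) {
            return false;
        }
        // Keep only the file name component: the client never chooses the target location.
        const std::filesystem::path name = std::filesystem::path(clientFileName).filename();
        if (name.empty() || name == "." || name == "..") {
            return false;                                   // not a well-formed name
        }
        const std::string ext = name.extension().string();
        if (ext != ".pdf" && ext != ".txt") {               // restrict to specific types
            return false;
        }
        // uploadRoot is a directory outside the web/application server path.
        return virusScanPassed(uploadRoot / name);
    }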
Ensure Web sites are scanned for buffer overflow flaws in the server products used and in custom web applications.
a. For the custom application code, all the code that accepts input from users shall be reviewed to ensure it provides appropriate input size checking.
b. Functions that do not check the size of the destination buffers, such as gets(), strcpy(), strcat(), and printf(), shall be avoided.
HTML/XML/Web Based Programming
Ensure that all format string functions are passed a static format string that cannot be controlled by the user, and that the proper number of arguments is always sent to the function. Do not use the %n operator in format strings. Basic Principles of Data Controls
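A short sketch contrasting unsafe and safe use of a format string function with user-controlled text (the function and variable names are illustrative):

    #include <cstdio>
    #include <string>

    void logMessage(const std::string& userText) {
        // Unsafe (shown only as a comment): the user-controlled text becomes the
        // format string itself, so embedded %n or %s specifiers would be interpreted.
        //     std::printf(userText.c_str());

        // Safe: the format string is a static literal and the input is an argument.
        std::printf("%s\n", userText.c_str());
    }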
Appropriate string management functions shall be used to prevent buffer overflows caused by conversion between multi-byte and Unicode strings. Buffer Overflows
A language or compiler that performs automatic bounds checking shall be used to prevent stack overflows. Basic Principles of Data Controls
An application process shall never accept or pass data blocks of invalid lengths, streams, or elements. Basic Principles of Data Controls
The application shall never write input data to an allocated memory buffer or cache if the buffer cannot first be validated to ensure that it is larger (in bytes) than the input data to be written. Buffer Overflows
Every data buffer allocated in the application shall be larger than the largest data element expected to be written to that buffer. Buffer Overflows
Unless the application can be guaranteed to perform correct input validation with buffer truncation, an absolute value or constant shall not be used to specify the size of any buffer used by that application. Buffer Overflows
The application, regardless of the programming language in which it is written, shall call only library routines and functions that are not known to be:
a. Susceptible to buffer overflows.
b. Vulnerable to other threats and attacks.
Basic Principles of Data Controls
Applications shall contain an exception handling service.
a. Regardless of the cause of the exception, the exception handling service should never place the application (including its executable(s) and data/resources) into an insecure state when attempting to handle the exception.
Testing & Debugging Code
The application/component shall not enter into an insecure state when resource exhaustion occurs. Testing & Debugging Code
Error messages shall not reveal more details than necessary about the application. Testing & Debugging Code
Web applications shall have a default error page defined (at least) for 404 errors and 500 errors to prevent attackers from mining information from the application container's built-in error response. Testing & Debugging Code
Error messages returned to users shall not contain any information from which an attacker could infer the type or cause of the error, the configuration of any application or environment component, the host directory structure, the application's processing state or residual vulnerabilities, or any other information that might be useful to an attacker.
a. If it provides any information at all indicating the type of error, the error message shall include a difficult-to-decipher reference to a more detailed error message in the offline application documentation. The error message shall never include the reference to an informative error message associated with any commonly used commercial product.
Testing & Debugging Code
The application’s exception handling service shall provide the administrator with several configurable options for how the mechanisms will respond to a flagged exception. In addition, the application’s exception handling service shall be initially configured with the safest actions set as defaults. Resetting the exception handling service to perform its less safe options should require explicit re-configuration by the administrator.
a. The set of configuration options for exception handling shall include, at a minimum:
1. Termination of the whole application.
2. Termination of the process in which the exception occurred.
3. Termination of the process in which the exception occurred, plus termination of other administrator-selected process(es).
Testing & Debugging Code
The application’s exception handling service shall (on an exception by exception basis), enable the administrator to configure one or more of the following pre-termination actions to be performed by the exception handling service before it terminates a process or the entire application:
a. Notify user that termination is about to occur.
b. Notify administrator that termination is about to occur.
c. Perform automatic checkpoint restart (in transaction-oriented and database applications).
Testing & Debugging Code
The exception handling service shall be able to detect failure conditions in all execution environment components with which the application interoperates or communicates. The exception handling service shall ensure that a failure in any environment component will not cause the application to fail in an insecure state, nor cause any of the application’s executable or data files to become vulnerable to compromise.
a. When the exception handling service detects an environment component failure it shall:
1. Notify the administrator that the component has failed.
2. Gracefully terminate all application processes that depend on/interact with that component (or terminate the whole application), so as to ensure that the application and its resources remain in a secure state.
Testing & Debugging Code
If the mission criticality of the application requires it to continue operating after the exception handling service has detected a flagged exception condition or the unavailability of a critical resource, the application shall be designed to adjust its operation in a way that will enable it to continue processing in a restricted or degraded way (e.g., reducing resources, reducing functionality). It will continue operating until the exception/resource unavailability persists past a programmer-defined threshold, or otherwise deteriorates to a degree beyond which even degraded operation is no longer possible. Once the application reaches the point where it can no longer operate at all, the exception handling service should perform an orderly shutdown of the application that preserves the secure state of the application, its data and resources. Testing & Debugging Code
When an application process fails, the application’s exception handling service shall immediately notify the administrator of the failure. The exception handling service shall enable the administrator to pre-configure the way in which he/she will receive the notification, (at a minimum) via email, console alarm, and pager alert. Testing & Debugging Code
The application’s exception handling service shall check all error conditions returned by external system calls, but shall not consume excessive resources trying to handle overly complex errors or too many simultaneous errors. Testing & Debugging Code
As part of its startup process after a failure, the application shall perform self-diagnostic consistency checks before restoring full operation to ensure that the following application properties and resources were not corrupted by the failure:
a. Call arguments
b. Basic state assumptions.
c. Access control permissions.
d. Security parameters.
e. Other critical parameters, data, and files.
Testing & Debugging Code
Recovery after a failure shall not cause or allow the application to re-initialize into an insecure state. Testing & Debugging Code
A server application or web service shall enable the administrator to set a threshold indicating the amount of time the server will wait for another entity to respond to output (data, web page) sent to it by the application. If the threshold expires, the server shall release its session lock for the session with the non-responding entity. Testing & Debugging Code
Before attempting to read or write to any file or directory, a server application/web service shall first verify that the file/directory is present at its expected location in the host file system. If the file/directory is missing, the application/service shall:
a. Return an error message informing the user or requestor that access to the requested file/directory is not possible.
b. Gracefully terminate the process that received the request for access to the missing file/directory.
Testing & Debugging Code
The application shall not contain design or implementation errors or flaws that could cause any of its executing processes to erroneously delete or overwrite data, incorrectly assign or change access permissions, or otherwise impede data availability. Testing & Debugging Code
Transaction processing, database, and other transaction-oriented applications (including COTS and OSS applications) shall include a checkpoint restart mechanism that enables the application to ensure that, when a transaction recovers from an environmental (e.g., operating system, hardware) failure, it resumes processing at exactly the point in processing it had reached before the application failed.
a. Checkpoint restart shall also ensure that the data processed by the transaction is rolled back to the state/version that existed at the time the transaction failed.
Testing & Debugging Code
Server applications, web services, and clients shall not contain design or coding errors or flaws that could be exploited by a malicious entity to launch a successful denial of service (DoS) attack against the application. Testing & Debugging Code
Changes in the application’s processing state, including changes induced by errors or exceptions during initialization, abort, or failure, shall not cause the application to enter an insecure state. Testing & Debugging Code
The application’s exception handling service shall be configurable by the administrator to recognize all resource allocation limits imposed on the application by the host operating system, so as to effectively handle exceptions caused by any attempt by the application to exceed its allocated resources. Testing & Debugging Code
The application shall be able to detect failure conditions in the execution environment components with which it interfaces and should discontinue its attempts to access or communicate with failed components. Testing & Debugging Code
The application shall check all exceptions and error codes from calls made to services and handle them appropriately. Testing & Debugging Code
Error messages shall not reveal implementation details (e.g., COTS components). Testing & Debugging Code
Attack input shall not be logged without additional protection or sanitization, because malicious content could be executed when the logs are viewed. Testing & Debugging Code
Security events shall be logged to files separate from error logs. Testing & Debugging Code
The application shall sanitize user information before using it in log messages. Testing & Debugging Code
In most cases, a standard error page shall be displayed in any error situation. The application shall trap all the system errors (such as missing configuration file, ODBC error, database not working, etc.) or application errors (such as invalid username/password) and redirect them to one standard error page. Ideally, the list of possible error situations and the corresponding error messages shall be identified and a generic error message for unknown or unidentified errors with no technical information shall be displayed to the user. Also, an entry in the logs shall be created with the time stamp and other relevant information. Testing & Debugging Code
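A hedged sketch of the trap-and-redirect pattern described above: exceptions are caught, a timestamped entry with the technical detail is written to a log, and only a generic error page is returned to the user (the log location and handler name are illustrative assumptions):

    #include <ctime>
    #include <exception>
    #include <fstream>
    #include <string>

    // Writes a timestamped entry to the application error log.
    void logError(const std::string& detail) {
        std::ofstream log("application-error.log", std::ios::app);
        const std::time_t now = std::time(nullptr);
        log << std::ctime(&now) << detail << '\n';
    }

    // Hypothetical request handler: detail goes to the log for administrators,
    // while the user sees only a generic error page with no technical information.
    std::string handleRequest() {
        try {
            // ... normal processing (configuration reads, database access, etc.) ...
            return "<html><body>OK</body></html>";
        } catch (const std::exception& ex) {
            logError(ex.what());
            return "<html><body>An error has occurred. Please try again later.</body></html>";
        }
    }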
An application that runs on a single-processor host shall be designed to be single-tasking and should not initiate a new task until the previous task has finished executing. General Programming
In an application written to run on a multiprocessor host, multitasking and multithreading shall not create conflicts among tasks/threads in their usage of system resources. For example, the memory and disk addresses for all tasks and threads should be synchronized to prevent such resource usage conflicts. General Programming
The application shall be designed and implemented to avoid timing and sequence errors such as race conditions, incorrect synchronization, and deadlocks. If the application is transaction-oriented (e.g., a database application), all transactions shall be designed to be atomic, and multiphase commits and hierarchical locking strategies shall be implemented to achieve asynchronous consistency within the application’s functions. General Programming
If the application updates a database, all database updates shall be implemented as discrete transactions. General Programming
Security testing of code units and individual components shall be performed to detect any implementation-specific vulnerabilities, such as race conditions, lack of randomness in cryptographic modules, buffer overflows, etc. Testing and Debugging Code
An application that is used to display the content of a file or database shall also enable the time/date stamp on that file/data object to be displayed. Date Fields
An application that is used to create, update, or modify files or database entries shall apply a time/date stamp to every file/database entry at the time of creation, and again each time that file/entry is updated/modified. Date Fields
External initialization of variables critical for system's functionality/stability shall not be allowed. HTML/XML/Web Based Programming
For applications using SSL (HTTPS), users shall not be able to access the same page with the same content using the equivalent HTTP URL. Removing the "S" from HTTPS shall not allow a user to bypass SSL. HTML/XML/Web Based Programming
A web application shall not transmit confidential data using HTTPS GET transactions. The HTTPS POST transactions sent over an SSL/TLS-encrypted connection can be used instead. HTML/XML/Web Based Programming
Sensitive data shall not be stored under Web Root. HTML/XML/Web Based Programming
Only non-persistent, encrypted cookies shall be used for storage and transmission of confidential data. Persistent cookies (even when encrypted) and unencrypted cookies (even when non-persistent) shall never be used for this purpose. HTML/XML/Web Based Programming
The application shall not respond to an invalid or unexpected input by revealing the file system/web server directory structure.
a) If a user inputs an invalid, truncated, or relative file system pathname, URL, or URI, the application shall not respond by listing the contents stored in the file system/web server directory at the same level as the invalid pathname, and must not otherwise reveal the file system/web server directory structure. Instead, the application should redirect the user to a default home or index page. Before redirecting the user to a default page, the application may display an uninformative error message.
HTML/XML/Web Based Programming
The system shall be configured to save the encrypted data in a different directory from any unencrypted data it writes/stores. HTML/XML/Web Based Programming
Client application(s) shall not store confidential data in any form, including hidden fields in HTML and XHTML forms.
a) All confidential data shall be stored on the server side and retrieved and forwarded to the client by the server on an as-required basis.
HTML/XML/Web Based Programming
All application administrative or privileged user data shall be transmitted over a secure interface.
a) Even if other data is sent to/from the application via insecure (e.g., unencrypted) interfaces, all administrative data shall be sent to/from the application over a secure interface. The same is true of interfaces used for any other privileged role.
HTML/XML/Web Based Programming
All authentication data shall be encrypted before transmission.
a. Usernames and cleartext authentication and authorization credentials, including passwords, Security Assertion Markup Language (SAML) assertions, password hints, session tokens/authentication cookies, etc., shall be encrypted before being transmitted between the user's system and the application. All sensitive information sent during an authentication exchange, including session tokens sent in the background, shall be encrypted before transmission.
HTML/XML/Web Based Programming
Secrets, such as passwords, SAML assertions, and sensitive information about the application itself, (e.g., known vulnerabilities, hidden file locations) shall not be stored in the application's source code. HTML/XML/Web Based Programming
Authentication credentials/CSPs used by the application’s authentication service shall not be hard-coded into any web pages, scripts, programmable function keys, or other components with user-viewable source code.
a) If the application contains any web pages, scripts, programmable function keys, or other components with user-viewable source code, the credentials/CSPs used by the application’s authentication service shall not be hard-coded into those components. Furthermore, the application shall not map any credential/CSP to a component with user-viewable source code.
HTML/XML/Web Based Programming
User-viewable source code shall not contain full pathnames.
a. Application components with user-viewable source code, and application error messages, shall not contain any full pathnames (e.g., URLs or URIs). Pathnames in user-viewable source code files should be relative, so they can be modified without having to rewrite the user-viewable source code. Error messages should not contain pathnames.
HTML/XML/Web Based Programming
The application shall not include any confidential data in any notification, error message, or redirect message it displays or returns to users. HTML/XML/Web Based Programming
All sensitive information that could be exploited by attackers shall be removed from the application’s user-viewable source code, including from code comments. All such information shall be stored in a separate file on the configuration management server, and should not be included in the user-viewable source code file on the deployed application’s host. HTML/XML/Web Based Programming
Connection strings and database (DB) credentials shall not be stored in cleartext in configuration files. Encryption shall be used and property/config files shall be protected with strict Access Control Lists (ACLs). HTML/XML/Web Based Programming
Sensitive information shall not be stored anywhere in response including cookies, hidden fields, URL, or headers. HTML/XML/Web Based Programming
The cleartext passwords shall not be hardcoded in the source code. HTML/XML/Web Based Programming
The application shall be designed to prevent exposure/disclosure of any data it receives over a secure connection or in encrypted form. HTML/XML/Web Based Programming
Sensitive information transmitted by the application to another entity shall be sent via a secure connection. HTML/XML/Web Based Programming
All cryptographic algorithms and functions used in the application shall be Federal Information Processing Standards (FIPS) 140-2 (or later) compliant. HTML/XML/Web Based Programming
If the application uses encryption as an access control method to prevent unauthorized access to data written by the application, the application shall be configured to save the encrypted data in a different directory from any unencrypted data it writes/stores (if any). In addition, the application’s host file system access controls must be configured to ensure strict separation between the directory containing the application’s encrypted data and the directory containing the application’s unencrypted data. HTML/XML/Web Based Programming
A client application shall not store any confidential data in any form, including in hidden fields in HTML and XHTML forms. All confidential data should be stored on a server, and retrieved and forwarded to the client by the server on an as-needed basis. HTML/XML/Web Based Programming
A server application/web service shall never return to the client any data other than or in addition to the data explicitly requested by the user/proxy agent. HTML/XML/Web Based Programming
The application shall not return confidential or high-integrity information in response to a request or other input received from an untrustworthy entity. HTML/XML/Web Based Programming
A web service that uses XML and Simple Object Access Protocol (SOAP) shall use standard WS-Security and XML Digital Signature protocols to sign SOAP and XML messages and documents. HTML/XML/Web Based Programming
The application shall not store in hidden HTML or XHTML form fields on the client any high integrity data or parameter data describing any HTML/XHTML form fields. Instead, such data should always be stored on the server.
a. The source of any update to low-integrity and non-confidential data stored in a hidden HTML/XHTML form field on the client should be validated before the update is allowed to proceed. If the source cannot be validated, the update shall be rejected.
HTML/XML/Web Based Programming
An application that is used to access (read or write) a database shall not enable unauthorized users to modify the references to those entries/records. HTML/XML/Web Based Programming
The application shall not trust data it receives over an untrustworthy channel unless the data has been digitally signed by the creator/sender before transmission and the application is able to validate that digital signature. HTML/XML/Web Based Programming
An application function that updates data (e.g., in a database) shall not cause the data to be reparsed incorrectly, introduce errors into the data, or otherwise corrupt the data during the update process. HTML/XML/Web Based Programming
The SSL and TLS technologies used in web applications and web services shall comply fully with the current approved specifications for those protocols. HTML/XML/Web Based Programming
Application configuration and security data, application executable(s), and data created by users shall not share the same directory on the host's file system.
a. The following items should be stored in different directories in the application host’s file system and should not share the same directory:
1. Application’s configuration and security data.
2. Application executable(s).
3. Data created by users (including proxy agents).
b. The access controls on each directory—discretionary and mandatory—are configured to prevent any entity from gaining access to the contents of any directory in any way (read, write, delete, or execute) that exceeds the privileges of that entity or its assigned role.
HTML/XML/Web Based Programming
The application’s account management mechanism shall not allow the administrator to assign more than one user to the same individual user account or to assign the same username to more than one account. General Programming
It shall not be possible to set the debug mode based on a hidden field in the request. General Programming
Data critical for the process flow shall not be stored in hidden fields or other places that can be controlled from client side using local proxies. General Programming
All application work flows shall be correct, and it shall not be possible to bypass any process within a defined workflow. General Programming
The application design shall not allow any two critically interdependent sequential operations or processes to ever be interrupted by an unrelated operation or process. General Programming
Only the absolute minimum privileges required to perform authorized functions shall be assigned, in accordance with IRM 10.8.1. General Programming
The privileges assigned to any entity (human or process) shall not exceed the absolute minimum privileges needed by that entity to perform its authorized functions. General Programming
Passwords shall not be stored in cleartext. General Programming
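One common way to satisfy this requirement is to store only a salted, iterated hash of each password. The following Java sketch (illustrative only) uses PBKDF2 from the standard javax.crypto API; the iteration count and output length shown are assumptions and should be taken from current agency-approved guidance.

    import java.security.GeneralSecurityException;
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    // Illustrative only: store salt:hash, never the cleartext password.
    public final class PasswordHasher {
        private static final SecureRandom RANDOM = new SecureRandom();

        public static String hash(char[] password) throws GeneralSecurityException {
            byte[] salt = new byte[16];
            RANDOM.nextBytes(salt);
            PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
            byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
            spec.clearPassword(); // clear the cleartext copy held by the spec
            return Base64.getEncoder().encodeToString(salt) + ":"
                 + Base64.getEncoder().encodeToString(hash);
        }
    }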
Authentication technology shall be implemented based on published open standards, such as X.509, SSL, and SAML.
a) Unless no standard technology is available to implement a certain application security function, the application shall be designed to use and interact with standard rather than vendor-proprietary security technology.
b) The application’s password management service shall:
1. Enable the administrator to assign passwords to user accounts upon account creation.
2. Require each user to change the password assigned to his/her account immediately after the first time he/she logs in using the administrator-assigned password, and before granting the user access to the application’s functions and resources.
3. Enable the user to change his/her own password on demand as frequently as he/she desires, but not so frequently as to render the minimum password age requirement useless.
4. Require the user to change his/her password at least once every 90 days (see the illustrative sketch following this item group).
5. Ensure that all passwords selected by users or assigned by administrators conform with the password strength criteria.
c) The application’s password management service shall ensure that:
1. Users have write permissions to their own passwords only (i.e., can change only the password assigned to their own accounts).
2. Administrators have write access to all passwords.
3. Neither users nor administrators have read permissions to any passwords, including their own.
d) The application’s password management service shall be configured so that application-level passwords are not readable or writable from the host operating system. A client application must not allow the user to map his/her password to any of the client host’s function keys.
e) The application’s authentication service shall be configurable to issue an alarm to the administrator whenever a user exceeds the maximum number of authentication attempts allowed for his/her account or role.
f) Backend server applications and provider web services shall be able to obtain from the portal the authenticated identity of each user who interacts, directly or indirectly, with that backend server/provider service. The backend shall use that identity as the basis for determining the user’s access rights to the information/service requested by the user.
g) Users shall be provided information that will allow them to detect both failed and successful attempts to access their accounts. The record of the last successful logon and of any failed attempts shall include the time and date.
General Programming
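A minimal sketch of the lifecycle checks in items b.2 and b.4 above (forced change of an administrator-assigned password, and a 90-day maximum password age). The class and parameter names are hypothetical; in practice this state would come from the application's account store.

    import java.time.Duration;
    import java.time.Instant;

    // Illustrative only: decide whether the user must change the password now.
    public final class PasswordPolicy {
        private static final Duration MAX_AGE = Duration.ofDays(90);

        public static boolean mustChangePassword(boolean assignedByAdmin, Instant lastChanged) {
            if (assignedByAdmin) {
                return true; // user has not yet replaced the administrator-assigned password
            }
            return Duration.between(lastChanged, Instant.now()).compareTo(MAX_AGE) >= 0;
        }
    }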
The application shall adequately log security-relevant events in accordance with IRM 10.8.3, Audit Logging Security Standards. General Programming
Refer to IRM 10.8.6, Secure Application Development, for additional general auditing guidance. General Programming
Before shutting down, the system shall delete/erase all temporary objects it created during its execution except for cache created for authentication. Basic Principles of Data Controls
The application shall clear all temporary data from memory (cache) and all temporary files, stored session tokens, etc., upon termination. The application should clear such temporary data more frequently to avoid risk of unauthorized disclosure. Before any process in the application terminates, that process shall delete/erase all temporary files, cache, data, and other objects it created during its execution. Similarly, before shutting down, the application as a whole must delete/erase all temporary objects it created during its execution. Basic Principles of Data Controls
The application shall not contain processes that create temporary data or files or create temporary copies of data or files unless those temporary objects are immediately purged from memory when the process that created them is terminated. Basic Principles of Data Controls
When application shutdown is invoked, the application shall, before shutting down, undo any state changes that occurred during its execution in order to return the application executable and its resources to their pre-execution secure state and normal operational mode. Basic Principles of Data Controls
Errors during clearing shall be handled in a way that ensures objects are not reused without clearing. Basic Principles of Data Controls
Browser cache directives shall be used for sensitive pages. Basic Principles of Data Controls
Temporary files shall be protected when created and removed when no longer required. Basic Principles of Data Controls
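A minimal Java sketch of this practice (illustrative only): the temporary file is created with owner-only permissions and deleted as soon as the work is complete. POSIX file permissions are assumed; on other platforms an equivalent ACL would be applied.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.attribute.PosixFilePermissions;
    import java.util.function.Consumer;

    // Illustrative only: protect the temporary file at creation, remove it when done.
    public final class TempFiles {
        public static void withTempFile(Consumer<Path> work) throws IOException {
            Path temp = Files.createTempFile("app-", ".tmp",
                    PosixFilePermissions.asFileAttribute(
                            PosixFilePermissions.fromString("rw-------"))); // owner-only access
            try {
                work.accept(temp);
            } finally {
                Files.deleteIfExists(temp); // remove as soon as it is no longer required
            }
        }
    }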
A pool of DB connections shall be maintained to avoid the cost of opening a new connection for each DB query. Basic Principles of Data Controls
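An illustrative (not prescriptive) sketch of such a connection pool using only the standard JDBC API; in practice the container's pooled DataSource would normally be used instead, and the URL and credentials shown are assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Illustrative only: open a fixed set of connections once and reuse them.
    public final class SimpleConnectionPool {
        private final BlockingQueue<Connection> pool;

        public SimpleConnectionPool(String url, String user, String password, int size)
                throws SQLException, InterruptedException {
            pool = new ArrayBlockingQueue<>(size);
            for (int i = 0; i < size; i++) {
                pool.put(DriverManager.getConnection(url, user, password));
            }
        }

        public Connection borrow() throws InterruptedException {
            return pool.take();           // blocks until a connection is free
        }

        public void release(Connection c) throws InterruptedException {
            pool.put(c);                  // return the connection for reuse
        }
    }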
The length of the session identifier shall be at least 128 bits. A shorter session identifier leaves the application open to brute-force session guessing attacks. HTML/XML/Web Based Programming
When authenticating a user, any existing session identifier shall be invalidated to prevent attackers from stealing authenticated sessions. HTML/XML/Web Based Programming
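For example, a 128-bit session identifier can be generated from a cryptographically strong random source, as in the following illustrative Java sketch (class and method names are hypothetical).

    import java.security.SecureRandom;
    import java.util.Base64;

    // Illustrative only: 16 random bytes = 128 bits of session-identifier entropy.
    public final class SessionIds {
        private static final SecureRandom RANDOM = new SecureRandom();

        public static String newSessionId() {
            byte[] id = new byte[16];
            RANDOM.nextBytes(id);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(id);
        }
    }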
Persistent or unencrypted session tokens (e.g., cookies) shall not be used for authentication.
a. Cookie-based authentication in a web application must be implemented using temporary (one-time) encrypted session tokens. Neither persistent cookies (even if encrypted) nor unencrypted cookies (even if they are one-time) may be used.
HTML/XML/Web Based Programming
If the application is session-oriented, the administrator shall be able to define the maximum duration for all sessions that require user authentication or in which a secure client-server interface (e.g., HTTPS) is used. Before initiating a new session, the user must be re-authenticated. The application should resume its interaction with the user at the point at which the session expired. Session termination should not cause the application to terminate any background processes underway at the time the session expires. HTML/XML/Web Based Programming
Session tokens, cookies, and other sensitive session information shall not be stored in the browser's history files. HTML/XML/Web Based Programming
Every exchange of authentication data or other critical security parameters (CSPs) between entities shall be performed over a trusted path (i.e., an encrypted link) between the entity sending the authentication data and the entity receiving the authentication data. In web applications, the cryptography used to encrypt authentication exchanges must be at least 128-bit SSL. HTML/XML/Web Based Programming
A web application shall require the user to be re-authenticated every time the user initiates a new session, after an administrator-configured session time out and whenever the user needs to be directly authenticated by a backend server. HTML/XML/Web Based Programming
The session token used to re-authenticate a web application user shall be stored by the browser in a non-replayable form so that:
a. The user is unable to repeatedly resubmit the same token in order to avoid having to explicitly re-authenticate after a session timeout.
b. an unauthorized user or attacker cannot spoof a valid user by capturing and submitting the user’s session token in order to fool the portal/web server into believing the unauthorized user/attacker is the valid user to whom the token belongs.
HTML/XML/Web Based Programming
If the application permits an entity to initiate multiple simultaneous sessions, the application's session management service shall enable the administrator to configure, for each individual account, group account, and role, the maximum number of simultaneous sessions that can be active for that account/role. The session management service should also prevent entities and processes that reach their maximum number of permitted sessions (whether configured for the entity/process itself, or for the group or role to which it belongs) from initiating a new session until one of the entity's/process's active sessions has been terminated. HTML/XML/Web Based Programming
Session cookies generated by the environment shall be used; custom session management shall not be implemented. HTML/XML/Web Based Programming
Session data shall remain on the server; only the session identifier should be shared with the client. HTML/XML/Web Based Programming
Upon login/logout, or when moving into or out of SSL, the old session shall be invalidated and a new session created. HTML/XML/Web Based Programming
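A minimal sketch of this rotation, assuming the javax.servlet API: any pre-existing session is invalidated and a new one created when the user authenticates, so a fixated or captured identifier cannot carry over into the authenticated session.

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    // Illustrative only: rotate the session at login.
    public final class LoginSessions {
        public static HttpSession rotateSession(HttpServletRequest request) {
            HttpSession old = request.getSession(false);
            if (old != null) {
                old.invalidate();            // invalidate the old (pre-login) session
            }
            return request.getSession(true); // create a new session for the authenticated user
        }
    }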
The application shall not use URL rewriting. HTML/XML/Web Based Programming
Sessions shall have an inactivity timeout and absolute timeout after which users must re-authenticate. HTML/XML/Web Based Programming
Users shall be provided an option on every page to log out. HTML/XML/Web Based Programming
Session identifiers shall be stored in a secure encrypted cookie and expire when the session is invalidated or the browser is closed.
a. The application shall force the browser to be closed when the user logs out or session is invalidated.
b. Implementation details shall be captured in the design phase and documented in the design document.
HTML/XML/Web Based Programming
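A minimal sketch of issuing such a cookie, assuming the javax.servlet API (Servlet 3.0 or later); the cookie name is hypothetical, and encryption of the identifier value itself is assumed to be handled separately per the requirement above.

    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative only: Secure + HttpOnly, non-persistent session cookie.
    public final class SessionCookies {
        public static void addSessionCookie(HttpServletResponse response, String sessionId) {
            Cookie cookie = new Cookie("SESSIONID", sessionId); // name is hypothetical
            cookie.setSecure(true);    // only transmitted over SSL/TLS
            cookie.setHttpOnly(true);  // not accessible to client-side script
            cookie.setPath("/");
            // No setMaxAge() call: the default (negative) max-age makes this a
            // non-persistent cookie that is discarded when the browser closes.
            response.addCookie(cookie);
        }
    }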
Code shall meet organizational and industry standards, conform to a consistent style guideline (code format), and shall be well documented. General Programming
All configurable data shall be kept in configuration files rather than hardcoded in the application. Basic Principles of Data Control
The application shall not disclose implementation details, including code, stack traces, class names, functions, database schemas, parameters, developer names, or company names. Basic Principles of Data Control
The application shall be designed so that all security parameters can be reset via a security configuration file(s). Security parameters shall not be hardcoded into the application so that the application code has to be rewritten in order to change any such parameter. Basic Principles of Data Control
All dead code and debug code (including back doors designed for testing) shall be removed before the application is deployed. Conditions that are always false/true and functions that are never called should be removed. Such code may not have been maintained along with the rest of the program or may contain information useful for designing attacks. The expression may also be indicative of a bug earlier in the method. Basic Principles of Data Control
The system shall separate interface services from information storage and management services.
a. The system shall physically or logically separate user interface services (e.g., public web pages) from information storage and management services (e.g., database management).
b. Integrity checks or checksums shall be performed to validate transmitted messages before parsing to ensure the data has not been corrupted in transmission.
c. Undocumented functions/features of the programming language/tool/APIs shall not be utilized since such functions may not be supported (or may cause unexpected results).
d. Obsolete UI features shall be removed or updated.
e. Wildcards used in user entered paths shall not cause unexpected results.
f. Protection against single message XML DoS attacks, including jumbo payloads, recursive elements, metatags, coercive parsing and public key DoS attacks shall be implemented.
g. The system shall minimize the amount of non-security functions included within the isolation boundary containing security functions.
h. The system shall reject messages from unauthorized sources.
i. If extensibility features are to be included in the application, they shall not be exploitable by an attacker.
j. Only recognized good programming practices shall be used. For example (but not limited to):
1. Avoid the use of unsafe language constructs; if necessary, replace unsafe calls with safe alternatives.
2. Ensure that all input from entities external to the application (human users or external processes) is adequately validated before being accepted by the application (this will help avoid buffer overflows, SQL and other command injections, cross-site scripting, etc.).
3. Write code that is simple and easily traceable (such code is easier to review and maintain).
k. No developer backdoors, debug interfaces, or unauthorized access paths shall be present in the application software.
l. Every application process, including every security process, shall have only one entry point and the absolute minimum possible number of exit points needed for the application to function correctly. The application code as a whole should have only one entry point and one exit point for each of its interfaces with external entities.
m. Each application component shall be self-contained and atomic, so that it can be disabled without terminating the operation of any other component.
n. The application shall not include any components (i.e., functions, processes, scripts, script mappings, or services) that are not:
1. Expressly invoked during the application’s execution.
2. Explicitly described in the application’s requirements specification. The code for any unused custom-developed or OSS components should be removed from the application’s code base before compilation. Any binary non-developmental components should be strictly configured (when the application is integrated/composed or installed) to be non-operational. The application’s host file system access controls should be configured to prevent execute access by any entity (user or software process) to any non-operational application component installed on the host.
o. The application’s security software modules shall be small, simple, and if at all possible perform only a single function per module. Complex functions should be accomplished through coordination and interaction between single-function modules. The application should not contain any large, complex, multi-function security modules.
p. The links between secure programs shall be static rather than dynamic, to protect the secure programs from being compromised if the host’s dynamic link library (DLL) mechanism is attacked.
q. Processes within the application program shall make no calls (local or remote) to other processes, libraries, resources, or entities, either within the application program itself, in other programs, or in the execution environment, if the called object cannot first be determined to be [1] present and [2] permitted to be addressed by the calling process.
r. The application shall be designed and implemented so as to prevent users and other entities from bypassing the application’s security controls in order to gain direct access to any component or resource in the application’s host file system directory or execution environment.
s. The application code shall not include any arithmetic errors, such as divide-by-zero errors, off-by-one counting errors, or missing negations.
t. The code that implements the application’s security functions shall not be dispersed widely throughout the application’s code listing, but should instead be centralized in a single, contiguous location in the code listing when possible.
u. The application code shall not include any GoTo statements that obscure control flows within the program.
v. The application shall be programmed to use only secure data types.
w. If the programming language in which an application component is written does not cause variables to be automatically initialized when they are declared, that component shall explicitly initialize all declared variables to valid, safe values.
x. The application code shall not include escape codes that directly invoke hardware device or device emulation functions.
y. The application code shall not contain server-side includes or other escapes by which a user or application process could gain direct access to the system shell, command line, or execute arbitrary system commands.
z. Aliases, pointers, links, caches, and other objects referenced by the application shall be named consistently throughout the application code base.
aa. Application processes shall use only APIs intended for use by software processes. Application processes should not use APIs intended for use by human users.
ab. The application shall be designed and configured to operate under the constraints of the execution environment when the environment components are configured for deployment. The application shall contain as few environment-specific dependencies as possible, and shall require no execution environment services or permissions that are not known to be available in the locked down environment.
ac. The application shall refer to a file by its filename only the first time it accesses/invokes that file. All subsequent references to that file by the application should be made by referencing a file handle or other file identifier.
ad. Application processes shall not trust environment variables passed to them directly by the operating system or other execution environment components. Instead, the application shall be designed and implemented so that its processes accept only arguments in environment parameters passed to them by the application program functions that are specifically designed to provide such variables.
ae. All developer accounts (including FTP, Telnet, and SSH accounts), backup files, temporary files, debug files, compilers, linkers, debuggers, third-party text editors, script editors and interfaces, web authoring tools, and other development tools, debug and test flags, high-risk services (e.g., FTP and Telnet), and other files or programs associated with development shall be removed from the application’s code base and execution environment before operational deployment.
af. Custom-developed and open source code shall be compiled with debug options turned off.
ag. The application’s runtime environment shall include only software libraries, routines, and other resources that are explicitly called by the application. All other objects should be removed from the runtime environment before application deployment.
ah. Fields shall be declared private instead of public. Accessors to them should be provided to limit their accessibility.
ai. Methods shall be declared private unless there is a good and documented reason to do otherwise. Non-private methods must protect themselves, because they may receive tainted data.
aj. Avoid using static field variables. Such variables are attached to the class (not class instances), and classes can be located by any other class. As a result, static field variables can be found by any other class, making them much more difficult to secure.
ak. Never return a mutable object to potentially malicious code (since the code may decide to change it). Note that arrays are mutable (even if the array contents are not). Do not return a reference to an internal array with sensitive data.
al. Never store user-supplied mutable objects (including arrays of objects) directly. The secure code should validate the object and copy the data when the secure code needs to use it. Clone arrays before saving them internally; however, beware of user-written cloning routines. (See the illustrative sketch following this list.)
Basic Principles of Data Control
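Illustrative Java sketch for items ah, ak, and al above (class and field names are hypothetical): the field is private and reachable only through an accessor, and the mutable array is defensively copied both when it is stored and when it is returned, so callers never share a reference to internal state.

    import java.util.Arrays;

    // Illustrative only: private field, accessor, and defensive copies of a mutable array.
    public final class AccountRecord {
        private final byte[] token;

        public AccountRecord(byte[] token) {
            this.token = Arrays.copyOf(token, token.length); // copy of caller-supplied array
        }

        public byte[] getToken() {
            return Arrays.copyOf(token, token.length); // never return the internal array itself
        }
    }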
Do not assume objects have been initialized; there are several ways to allocate uninitialized objects. Basic Principles of Data Control
All classes and methods should be declared final.
a. Make everything final whenever possible. If a class or method is non-final, an attacker could extend it in a dangerous and unforeseen way. Note that declaring classes and methods final causes a loss of extensibility, in exchange for security.
Basic Principles of Data Control
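For example (illustrative only; the class name is hypothetical):

    // Declaring the class final prevents an attacker-supplied subclass from
    // overriding its behavior; individual methods may likewise be declared
    // final when the class itself must remain extensible.
    public final class AuditLogger {
        public void record(String event) {
            // write the event to the protected audit log (implementation omitted)
        }
    }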
Application security shall not rely on package scope.
a. A few packages, such as java.lang, are closed by default; otherwise, assume Java packages are not closed. Thus, an attacker could introduce a new class into a package already in use and use this new class to access any objects and data.
Basic Principles of Data Control
Inner classes shall not be used.
a. When inner classes are translated into byte codes, the inner class is translated into a class accessible to any class in the package. The enclosing class's private fields silently become non-private to permit access by the inner class.
Basic Principles of Data Control
Classes shall be made uncloneable.
a. Java's object-cloning mechanism allows an attacker to instantiate a class without running any of its constructors. To make a class uncloneable, define a final clone() method that throws an exception (as illustrated below).
Basic Principles of Data Control
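An illustrative sketch of such a clone() method (class name is hypothetical):

    // A final clone() that always throws prevents the object-cloning mechanism
    // from creating an instance without running the class's constructors.
    public class UncloneableRecord {
        @Override
        protected final Object clone() throws CloneNotSupportedException {
            throw new CloneNotSupportedException("cloning not permitted");
        }
    }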
Classes shall be made unserializable.
a. Serialization allows attackers to view the internal state of objects, even private portions.
b. Even if a class is not serializable, it may still be deserializable. An attacker can create a sequence of bytes that deserializes into an instance of a class with values of the attacker's choosing. In other words, deserialization is a kind of public constructor, allowing an attacker to choose the object's state.
Basic Principles of Data Control
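An illustrative sketch of blocking serialization and deserialization (class name is hypothetical): even when a class is, or inherits, Serializable, a writeObject/readObject pair that always throws prevents its state from being written out or reconstructed from attacker-supplied bytes.

    import java.io.IOException;
    import java.io.NotSerializableException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    // Illustrative only: serialization and deserialization are both refused.
    public class UnserializableRecord implements Serializable {
        private static final long serialVersionUID = 1L;

        private final void writeObject(ObjectOutputStream out) throws IOException {
            throw new NotSerializableException("serialization not permitted");
        }

        private final void readObject(ObjectInputStream in) throws IOException {
            throw new NotSerializableException("deserialization not permitted");
        }
    }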
Classes shall not be compared by name. Attackers can define classes with identical names causing problems by granting these duplicated classes undesirable privileges. Basic Principles of Data Control
The application shall not send email messages that include executable code. Basic Principles of Data Control
The application shall not transmit unsigned mobile code. Basic Principles of Data Control
The application shall not transmit mobile code that attempts to access local operating system resources or establish network connections to servers other than the application server. Basic Principles of Data Control
The application shall not execute mobile code without requiring and validating digital signatures. Basic Principles of Data Control
The application shall utilize a type of mobile code for which there is an established policy. Basic Principles of Data Control
If the application’s executable image has an integrity mechanism (hash or code signature) affixed to it, the operating system must include a facility that can validate the integrity mechanism before executing the application. Failure to validate the mechanism shall cause the file system to prevent application execution and notify the administrator that the application’s executable has been corrupted and should be reinstalled from a clean copy. Basic Principles of Data Control
If an application entity retrieves or receives data or executable code (e.g., mobile code, mobile agent) that has an integrity mechanism (hash, digital signature, code signature) affixed to it, the entity shall ensure that the integrity mechanism can be validated before using the data or executing the code. If the integrity mechanism cannot be validated, the application shall delete the data/code and audit the deletion. In the case of a client application that cannot validate a code signature, the application should report to the user its inability to execute the code. Basic Principles of Data Control
If the access controls in the file system or DBMS that protect data created, updated, or overwritten by the application are not sufficiently robust to protect the integrity of the data, the application shall apply a digital signature (preferred) or hash (less desirable) to each data object it creates, updates, or overwrites, using an agency-approved technology. Basic Principles of Data Control
If the application transmits mobile code or other executable code over a high level of concern network and the integrity protection of that network is inadequately robust, the application shall enable the code to be digitally signed using a code signing technology in conjunction with a PKI code signing certificate, prior to transmission. Basic Principles of Data Control
Client and server applications shall include an API or plug-in providing direct access to the host's anti-virus tool, or to the application's own embedded anti-virus capability. Via this API, the application shall ensure that every file it receives over the network is scanned and, if necessary, sanitized by the anti-virus tool before that file is opened or executed by the application. Basic Principles of Data Control
An application that forwards mobile code or mobile agents to other entities shall, before forwarding a mobile executable, verify that the executable includes no pointers to remotely stored malicious code. Basic Principles of Data Control
All user input shall be checked for presence of malicious code before that input is accepted by the application. Input that contains malicious code shall either be sanitized or rejected by the application. Basic Principles of Data Control
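A minimal Java sketch of whitelist-style validation in support of this requirement; the pattern shown is hypothetical and would be tailored to each input field.

    import java.util.regex.Pattern;

    // Illustrative only: accept input only when it matches an explicit whitelist
    // pattern; anything else is rejected rather than passed on to the application.
    public final class InputValidator {
        private static final Pattern ALLOWED = Pattern.compile("^[A-Za-z0-9-]{1,32}$");

        public static String requireValid(String input) {
            if (input == null || !ALLOWED.matcher(input).matches()) {
                throw new IllegalArgumentException("input rejected by validation");
            }
            return input;
        }
    }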
Mobile agents and other mobile code programs shall be written using the lowest risk mobile code technology that can satisfy the functional requirements of the program. Preferably mobile code shall be digitally signed. Basic Principles of Data Control
Untrusted mobile code shall not be executed. Basic Principles of Data Control

Exhibit 10.8.6-5 
Checklists for Additional Systems Development, Programming, and Source Code Standards

The previous version of this IRM (July 31, 2007) had identified the checklists listed in Exhibit 10.8.6-5, Additional Systems Development, Programming and Source Code Standards Checklist, for relocation to IRM 2.5.3, Systems Development, Programming and Source Code Standards. These checklists might be incorporated into a future revision of IRM 2.5.3. Until that time, they can be found in the exhibit table and should be implemented where applicable.

General Checks
Check Category
✓ All buffer management functions are safe from buffer overruns.
✓ Review Strsafe.h for potential use.
✓ Review the latest update of dangerous or outlawed functions.
✓ All DACLs well formed and good — not NULL or Everyone (Full Control).
✓ No hard-coded password fields (should be at least PWLEN + 1 for NULL, PWLEN is defined in LMCons.h, and is 256).
✓ No references to any internal resources (server names, user names) in code.
✓ Security support provider calls not hardcoded to NTLM (use Negotiate).
✓ Temporary file names are unpredictable.
✓ Calls to CreateProcess[AsUser] do not have NULL as first argument.
✓ Unauthenticated connections cannot consume large resources.
✓ Error messages do not give too much info to an attacker.
✓ Highly privileged processes are scrutinized by more than one person—does the process require elevated privileges?
✓ Security sensitive code is commented appropriately.
✓ No decisions made on the name of files.
✓ Check that file requests are not for devices (e.g., COM1, PRN).
✓ No shared or writable PE segments.
✓ No user data written to HKLM in the registry.
✓ No user data written to c:\program files.
✓ No resources opened for GENERIC_ALL, when lesser permissions will suffice.
✓ Application allows binding to appropriate IP address, rather than 0 or INADDR_ANY.
✓ Exported APIs with byte count versus word count documented.
✓ Impersonation function return values checked.
✓ For every impersonation, there is a revert.
✓ Service code does not create windows and is not marked as interactive.
Remote Procedure Call (RPC) Checks
Check Category
✓ Interface Definition Language (IDL) file(s) compiled with /robust.
✓ [range] used if appropriate.
✓ RPC connections are authenticated.
✓ Use of packet privacy and integrity investigated.
✓ Strict context handles used.
✓ Context handles != access checks.
✓ NULL context handles correctly handled.
✓ Access is determined by security callbacks.
✓ Implications of multiple RPC servers in a single process investigated.
Web and Database-Specific Checks
Check Category
✓ No Web page issues output based on unfiltered input.
✓ No string concatenation for SQL statements.
✓ No connections to SQL Server as sa.
✓ No ISAPI applications running in process with IIS 5.
✓ Force a codepage in all Web pages.
✓ No use of eval function with untrusted input in server pages.
✓ No reliance on REFERER header.
✓ Any client-side access and validity checks are performed on the server also.
ActiveX, COM, and DCOM Checks
Check Category
✓ All ActiveX controls, marked as safe for scripting, are indeed safe.
✓ SiteLock use investigated.
Crypto and Secret Management Checks
Check Category
✓ No embedded secret data (EXE, DLL, registry, files, etc.).
✓ Secret data is secured appropriately.
✓ Calls to memset/ZeroMemory on private data are not optimized away. If they are, replace with SecureZeroMemory.
✓ No home-developed crypto code—use CryptoAPI or System.Security.Cryptography.
✓ Random number generation reviewed.
✓ Password generation is random.
✓ RC4 code does not reuse an encryption key.
✓ RC4-encrypted data has integrity checking.
✓ No weak crypto (256-bit versus 128-bit).
Managed Code Checks
Check Category
✓ FXCop has no security complaints.
✓ No sensitive data in XML or configuration files.
✓ Classes are marked final, if appropriate.
✓ Inheritance demands on classes, if appropriate.
✓ All assemblies are strong-named.
✓ Assemblies use RequestMinimum to define the must-have grant set.
✓ Assemblies use RequestRefuse to reject specific permissions.
✓ Assemblies use RequestOptional to outline optional permissions that may be required.
✓ Assemblies that allow partial trust are thoroughly reviewed and have a valid partial-trust scenario.
✓ Demand appropriate permissions.
✓ Assert is followed by RevertAssert to keep time of asserted permission small.
✓ Code that denies access based on a filename is carefully checked.
✓ Assert trumps calls to PermitOnly and Deny further up the stack. Check code that attempts to operate otherwise.
✓ LinkDemand thoroughly audited for correctness. Are link demands really required?
✓ No stack trace provided to untrusted users.
✓ SuppressUnmanagedCodeSecurityAttribute used with caution.
✓ Managed wrappers to unmanaged code checked for correctness.
Top Ten Vulnerabilities in Developed Application Code
Check   Category
✓ Unvalidated Input: Information from web requests is not validated before being used by a web application. Attackers can use these flaws to attack backend components through a web application.
✓ Broken Access Control: Restrictions on what authenticated users are allowed to do are not properly enforced. Attackers can exploit these flaws to access other users’ accounts, view sensitive files, or use unauthorized functions.
✓ Broken Authentication and Session Management: Account credentials and session tokens are not properly protected. Attackers that can compromise passwords, keys, session cookies, or other tokens can defeat authentication restrictions and assume other users’ identities.
✓ Cross Site Scripting (XSS) Flaws: The web application can be used as a mechanism to transport an attack to an end user’s browser. A successful attack can disclose the end user’s session token, attack the local machine, or spoof content to fool the user.
✓ Buffer Overflows: Web application components in some languages that do not properly validate input can be crashed and, in some cases, used to take control of a process. These components can include Common Gateway Interface (CGI), libraries, drivers, and web application server components.
✓ Injection Flaws: Web applications pass parameters when they access external systems or the local operating system. If an attacker can embed malicious commands in these parameters, the external system may execute those commands on behalf of the web application.
✓ Improper Error Handling: Error conditions that occur during normal operation are not handled properly. If an attacker can cause errors to occur that the web application does not handle, they can gain detailed system information, deny service, cause security mechanisms to fail, or crash the server.
✓ Insecure Storage: Web applications frequently use cryptographic functions to protect information and credentials. These functions and the code to integrate them have proven difficult to code properly, frequently resulting in weak protection.
✓ Denial of Service: Attackers can consume web application resources to a point where other legitimate users can no longer access or use the application. Attackers can also lock users out of their accounts or even cause the entire application to fail.
✓ Insecure Configuration Management: Having a strong server configuration standard is critical to a secure web application. These servers have many configuration options that affect security and are not secure out of the box.
Some Common Application Security Vulnerabilities
Check   Category
✓ Session hijacking: A hacker will claim the identity of another user in the system.
✓ Command Injection (e.g., SQL Injection): A hacker will modify input, causing a database to return other users’ data.
✓ Cross Site Scripting (XSS): A hacker will reflect malicious scripts off a web server to be executed in another user’s browser to steal their session, redirect them to a malicious site, steal sensitive user data, or deface the web page.
✓ Buffer Overflows: A hacker will overflow a memory buffer or the stack, causing the system to crash or to load and execute malicious code, thereby taking over the machine.
✓ Denial of Service: A hacker will render individual users, or the entire system, unable to operate.
