13.5.1 TAS Balanced Performance Measurement System

13.5.1.1 Program Scope
13.5.1.1.1 Background
13.5.1.1.2 Authority
13.5.1.1.3 Responsibilities
13.5.1.1.4 Program Objectives
13.5.1.1.5 Acronyms
13.5.1.2 TAS Employee Satisfaction Measure
13.5.1.2.1 Employee Satisfaction - Use and Limitations of Survey Information
13.5.1.2.2 Employee Satisfaction - Roles and Responsibilities - Business Assessment (BA) Director
13.5.1.2.3 Employee Satisfaction - Roles and Responsibilities - TAS Senior Managers
13.5.1.2.4 Employee Satisfaction - Roles and Responsibilities - TAS Managers
13.5.1.2.5 Employee Satisfaction - Roles and Responsibilities - TAS Employees
13.5.1.3 TAS Customer Satisfaction Measure
13.5.1.3.1 Case Advocacy Customer Satisfaction Measure
13.5.1.3.1.1 Case Advocacy Customer Satisfaction - Use and Limitations of Survey Information
13.5.1.3.1.2 Case Advocacy Customer Satisfaction - Roles and Responsibilities - BA Director
13.5.1.3.1.3 Case Advocacy Customer Satisfaction - Roles and Responsibilities - EDCA
13.5.1.3.1.4 Case Advocacy Customer Satisfaction - Roles and Responsibilities - EDCA ITS
13.5.1.3.1.5 Case Advocacy Customer Satisfaction - Roles and Responsibilities - DEDCA
13.5.1.3.1.6 Case Advocacy Customer Satisfaction - Roles and Responsibilities - LTA
13.5.1.3.1.7 Case Advocacy Customer Satisfaction - Roles and Responsibilities - TAGM
13.5.1.3.2 Systemic Advocacy Customer Satisfaction Measure
13.5.1.3.2.1 Systemic Advocacy Customer Satisfaction - Use and Limitations of Survey Information
13.5.1.4 TAS Business Results Measures
13.5.1.4.1 TAS Quality Measures
13.5.1.4.1.1 Case Advocacy Quality Measures
13.5.1.4.1.2 Case Advocacy Quality Attributes
13.5.1.4.1.3 Case Advocacy Quality - Use and Limitations of Review Information
13.5.1.4.1.4 Case Advocacy Quality Monthly Review Sample
13.5.1.4.1.5 Case Advocacy Quality Dialogue Process
13.5.1.4.1.6 Case Advocacy Quality - Roles and Responsibilities - QRP
13.5.1.4.1.7 Case Quality - Roles and Responsibilities - EDCA
13.5.1.4.1.8 Case Quality - Roles and Responsibilities - EDCA ITS
13.5.1.4.1.9 Case Advocacy - Quality Roles and Responsibilities - DEDCA
13.5.1.4.1.10 Case Quality Advocacy - Roles and Responsibilities - LTA
13.5.1.4.1.11 Case Quality Advocacy - Roles and Responsibilities - TAGM
13.5.1.4.2 TAS Systemic Advocacy (SA) Quality Measures
13.5.1.4.2.1 Systemic Advocacy (SA) Quality Attributes
13.5.1.4.2.2 Sharing and Using TAS Systemic Advocacy (SA) Quality Review Results
13.5.1.4.2.3 TAS Systemic Advocacy (SA) Dialogue Process
13.5.1.4.2.3.1 TAS Systemic Advocacy Dialogue Process and Timeframes
13.5.1.4.2.4 Systemic Advocacy Quality - Roles and Responsibilities - QRP Director
13.5.1.4.2.5 Systemic Advocacy Quality - Roles and Responsibilities - EDSA
13.5.1.5 Using Diagnostic Tools in TAS
Exhibit 13.5.1-1 Case Advocacy National Quality Sample Size

Part 13. Taxpayer Advocate Service
Chapter 5. TAS Balanced Measures
Section 1. TAS Balanced Performance Measurement System

13.5.1 TAS Balanced Performance Measurement System

Manual Transmittal
September 29, 2020

Purpose
(1) This transmits a complete text and Table of Contents for IRM 13.5.1, Taxpayer Advocate Service, Balanced Measures.

Material Changes
(1) IRM 13.5.1.3.1.1 (7) is updated to correct grammar.
(2) IRM 13.5.1.4.1.6 (1)d is updated to remove the requirement to publish quality results in BOE since SharePoint is primarily used.
(3) IRM 13.5.1.4.1.10 (2)c is updated to clarify the dialogue process.
(4) IRM 13.5.1.4.2.1 (2) is updated to change categories to focus areas and update the narratives. The note is also updated to change the location where the SA Quality Attributes can be found.
(5) Exhibit 13.5.1-1 is updated to correct grammar in the first paragraph for clarification.

Effect on Other Documents
Supersedes IRM 13.5.1 dated September 5, 2019

Audience
All Taxpayer Advocate Service employees

Effective Date
(09-29-2020)

Erin M.
Collins

13.5.1.1 (09-05-2019)
Program Scope

This section provides an overview of the Balanced Performance Measurement System for TAS and outlines how TAS uses balanced measures to monitor, measure, and improve organizational performance. This section supplements IRS guidance in IRM 1.5.1, Managing Statistics in a Balanced Measurement System, The IRS Balanced Performance Measurement System.

13.5.1.1.1 (09-05-2019)
Background

In fiscal year 2000, TAS developed a system of balanced measures to assist in measuring and improving organizational performance. TAS's Balanced Performance Measurement System includes the following components:
Employee Satisfaction
Customer Satisfaction
Business Results (Quality and Quantity)

13.5.1.1.2 (09-05-2019)
Authority

Internal Revenue Code (IRC) §7803 established the Office of the National Taxpayer Advocate to assist with resolving problems with the Internal Revenue Service (IRS), to identify areas in which taxpayers have problems dealing with the IRS, to propose changes in the administrative practices of the IRS, and to identify potential legislative changes to mitigate those problems.

13.5.1.1.3 (09-05-2019)
Responsibilities

TAS managers are responsible for using balanced measures data to monitor, measure, and improve organizational performance.

13.5.1.1.4 (09-05-2019)
Program Objectives

TAS uses balanced performance measures data to:
assess program effectiveness and service delivery;
understand why measured data has changed; and
determine what actions were taken or could be taken to influence results.
Caution must be exercised when sharing numeric targets and performance results in order to avoid driving unintended consequences. The performance of any one unit should not be used as a standard by which the performance of any other unit is evaluated. Each unit has unique factors, specific tax issues to address, and differences in the types of taxpayers served.
The numerical results achieved for any measure will never directly equate to the evaluation of an individual. Additional information about the appropriate use of measures, including the definition of records of tax enforcement results (ROTERs), setting targets, use of measures in evaluations, etc., is in IRM 1.5.1, Managing Statistics in a Balanced Measurement System, the IRS Balanced Performance Measurement System. All managers must ensure strict adherence to IRM guidance on the appropriate use and application of balanced measures.

13.5.1.1.5 (09-05-2019)
Acronyms

The following table contains a list of acronyms used throughout this IRM.

Acronym - Definition
AP - Advocacy Projects
BA - Business Assessment
BM - Business Modernization
BOE - Business Objects Enterprise
CIPSEA - Confidential Information Protection and Statistical Efficiency Act
CQR - Case Quality Review
DEDCA - Deputy Executive Director of Case Advocacy
EDCA - Executive Director of Case Advocacy
EDCA ITS - Executive Director of Case Advocacy Intake and Technical Support
II - Immediate Interventions
IRC - Internal Revenue Code
IRS - Internal Revenue Service
LTA - Local Taxpayer Advocate
QRP - Quality Review Program
QSS - Quality Sample Selection
ROTER - Record of Tax Enforcement Results
SA - Systemic Advocacy
SAED - Strategy Assessment & Employee Development
SAMS - Systemic Advocacy Management System
SOI - Statistics of Income
TAGM - Taxpayer Advocate Group Manager

13.5.1.2 (09-05-2019)
TAS Employee Satisfaction Measure

The employee satisfaction measure is a numerical rating of employees' perception of management practices, organizational barriers, and the overall work environment that affect employees' efforts to do a good job. Employee satisfaction is a key component of employee engagement, which is the degree of employees' motivation, commitment, and involvement in the mission of the organization.
The goal of the employee satisfaction component is to measure, among other factors bearing upon employee satisfaction, the quality of supervision and the adequacy of training and support services. Employee satisfaction is measured through an annual servicewide survey administered to all TAS employees. The survey provides employees with the opportunity to provide confidential information regarding their satisfaction in important areas such as:
Leadership policies and practices;
Work environment;
Rewards and recognition for professional accomplishment and personal contributions to achieving the organizational mission;
Opportunity for professional development and growth; and
Opportunity to contribute to achieving the organizational mission.
Survey results are received at the national, area, and office levels and, where applicable, at the workgroup level. To protect employee anonymity, results are received only if the minimum number of respondents is met. Survey results should be used by all levels of the organization to make improvements that address employees' concerns and increase employees' level of engagement and satisfaction. TAS must consider and address employee satisfaction in organizational planning, budgeting, and review activities.

13.5.1.2.1 (09-05-2019)
Employee Satisfaction - Use and Limitations of Survey Information

Each TAS manager receives an employee survey results report to share with their workgroup. Survey results are available only if a minimum of 10 responses was received. If the minimum is not met, the manager receives a report reflecting higher-level results.

Example: A Case Advocate group had fewer than 10 responses. Responses from the employees rolled up to the next (LTA) level. The LTA level had a total of 15 responses, and a results report was available. The TAGM received a copy of the LTA-level results to share with the employees. The LTA also receives a copy of the LTA-level results to share with his/her direct reports.
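The minimum-response roll-up rule in the example above can be sketched as a simple threshold check. Only the 10-response minimum and the workgroup-to-LTA roll-up come from this section; the function name and the two-level data shape are illustrative, not the actual survey vendor's logic.

```python
MIN_RESPONSES = 10  # minimum respondents required before results are released (per this section)

def report_level(workgroup_responses, parent_responses):
    """Return which level's results report a manager receives.

    If the workgroup meets the minimum, its own results are released;
    otherwise responses roll up and the next (e.g., LTA) level's report
    is shared instead, protecting respondent anonymity. If even the
    parent level misses the minimum, results roll up further.
    """
    if workgroup_responses >= MIN_RESPONSES:
        return "workgroup"
    if parent_responses >= MIN_RESPONSES:
        return "parent"
    return "next higher level"

# The example from the text: a group with fewer than 10 responses rolls up
# to the LTA level, which had 15 responses, so the LTA-level report is shared.
print(report_level(8, 15))  # -> parent
```

Because reports are released only at levels that clear the threshold, no manager can infer how any individual in a small group answered.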
National Office will have access to information collected from the employee satisfaction survey at the national, area, and office levels, if available. Areas will only have access to nationwide information, their own area, and the offices under their chain of command. Offices will only have access to nationwide information, their area, and their own office. Survey results of any one office or workgroup should not be used as a standard by which any other unit is evaluated, as there are inherent differences among workgroups and offices. Survey results should be used in coordination with feedback received from other sources, such as discussions with employees, town hall meetings, and elevated issues, to identify and address employees' concerns and make improvements that will increase employee satisfaction.

13.5.1.2.2 (09-05-2019)
Employee Satisfaction - Roles and Responsibilities - Business Assessment (BA) Director

The BA Director is responsible for the following nationwide activities:
Serving as TAS's primary point of contact for the IRS Human Capital Office, which leads Servicewide Employee Engagement;
Leading TAS Employee Engagement Coordinators to ensure program requirements for administering the survey are met;
Ensuring workgroups are properly reflected in the database used to administer the survey. This ensures every manager receives their workgroup codes and results report;
Analyzing nationwide survey results and other data, providing recommendations for improvements, and collaborating with stakeholders to implement improvements;
Developing and implementing an annual nationwide employee engagement communication plan;
Developing and maintaining an annual nationwide employee engagement action plan that includes activities to address and increase employee satisfaction; and
Adhering to confidentiality rules governed by the Confidential Information Protection and Statistical Efficiency Act (CIPSEA).
13.5.1.2.3 (09-05-2019)
Employee Satisfaction - Roles and Responsibilities - TAS Senior Managers

TAS senior managers are responsible for the activities shown in IRM 13.5.1.2.4, Employee Satisfaction - Roles and Responsibilities - TAS Managers, for all workgroups within their office or department. In addition, TAS senior managers should conduct ongoing discussions with their subordinate managers to share best practices and make improvements that address employees' concerns and increase employee satisfaction.

13.5.1.2.4 (09-05-2019)
Employee Satisfaction - Roles and Responsibilities - TAS Managers

TAS managers' annual performance plan includes employee satisfaction. TAS managers are responsible for using survey results and employee feedback to develop personal commitments that incorporate activities promoting employee satisfaction into their daily operations and interactions with employees. TAS managers are also responsible for annually reviewing results from the employee survey and conducting annual meetings with their workgroup to:
Engage employees in meaningful dialogue to identify and overcome barriers that impact employees' ability to perform their jobs effectively and increase their job satisfaction;
Recognize the accomplishments of the workgroup and its members;
Identify areas of strength to build on and improvement initiatives to address employees' concerns; and
Develop and document actionable improvement initiatives.
Following the annual meeting, TAS managers are responsible for:
Implementing the workgroup's identified improvement activities that are within the workgroup's control;
Elevating improvement initiatives that are beyond the workgroup's control but within the organization's control;
Following up on elevated workgroup recommendations; and
Monitoring implemented initiatives to assess whether the desired outcome is achieved and making adjustments, if appropriate.
Year-round, TAS managers should incorporate activities and discussions with employees to surface employees' concerns and together develop improvements that will address those concerns, improve work processes, and increase employee satisfaction. Managers will provide sufficient time and resources for all personnel who perform duties related to the employee satisfaction program.

13.5.1.2.5 (09-05-2019)
Employee Satisfaction - Roles and Responsibilities - TAS Employees

Employees are encouraged to support the employee satisfaction survey process by completing the confidential surveys and answering all items candidly and honestly. Employees are also encouraged to actively participate in the results meetings to discuss employee survey results and concerns and to provide recommendations to increase employee satisfaction. Employee engagement and satisfaction are a year-round commitment and partnership for both the employee and the manager. Employees are encouraged to raise concerns and recommendations for improvement throughout the year with their manager so the manager has the opportunity to address those concerns.

13.5.1.3 (09-05-2019)
TAS Customer Satisfaction Measure

TAS measures its customer satisfaction for both Case Advocacy and Systemic Advocacy. Measures for Case Advocacy and Systemic Advocacy are discussed in greater detail in subsequent IRM sections.

13.5.1.3.1 (09-05-2019)
Case Advocacy Customer Satisfaction Measure

TAS uses a paper survey to measure the customer satisfaction of those taxpayers who were part of the case resolution process. A number of these taxpayers with closed cases are randomly selected on a monthly basis to complete the survey. The Business Assessment (BA) unit is responsible for the development and operation of the survey process. This includes working with TAS Research in developing the survey and sample plan, compiling the data, and providing comprehensive reports.
The methodology for sampling includes a process to ensure respondents cannot be identified or associated with their responses. A statistically valid sampling plan for the level of customer satisfaction TAS intends to measure is prepared by the TAS Research unit. It is used to determine the number of randomly selected participants for the survey. The level of customer satisfaction measurement is always at the national level; however, it may be extended to the area or office level as needed. TAS uses the survey results to assess its performance in various aspects, such as:
Was TAS responsive to its customers' needs regarding timeliness, accuracy, fairness, and the resolution of the problem?
Did your advocate listen to you?
Were you satisfied with how your advocate explained your rights as they pertain to your case?
Did TAS help the taxpayer understand their rights as a taxpayer?
The goal of the Case Advocacy Customer Satisfaction Measure is to measure, among other things, whether its customers (taxpayers or their representatives) believed they received courteous, timely, and professional treatment from the TAS personnel with whom they dealt.

13.5.1.3.1.1 (09-22-2020)
Case Advocacy Customer Satisfaction - Use and Limitations of Survey Information

TAS uses the information collected from the survey to conduct additional analysis and identify strategies to improve customer satisfaction, enhance our communications, and reach the best possible outcome for taxpayers. Survey reports will not identify individual TAS employees nor the taxpayers or practitioners with whom TAS interacted. For this survey, the written survey materials assure external customers their responses are confidential and guarantee their privacy by combining their responses with those of other taxpayers and by reporting only statistical totals.
TAS is required to provide a Privacy Act notice to taxpayers, which states TAS is required to follow the confidentiality protections required by the Privacy Act and/or Internal Revenue Code section 6103. TAS will never use survey information to identify an individual employee or to evaluate the performance of an individual employee. All identifying information, such as name, address, case file number, and phone number, is removed before customer satisfaction data is compiled. BA and TAS Research will provide an end-of-year report at the national level from the customer satisfaction survey results. The National Office has access to all information collected from the customer satisfaction survey regardless of the level at which it is collected - national, area, or LTA office level. Any data collected below the national level will be shared as follows:
Areas will only have access to nationwide information, their own area, and the LTA offices under their chain of command.
LTA offices will only have access to nationwide information, their area, and their own office.
All levels of the organization should use the information collected from the survey to conduct analysis, explore best practices, and develop plans to improve customer satisfaction.

Note: Customer satisfaction results are only one measurement of program performance and must be balanced with other measures and indicators to evaluate the overall success of TAS advocacy and to develop plans for improvement. Survey results of any one office or workgroup should not be used as a standard by which any other unit is evaluated because of inherent differences among offices and workgroups. Customer satisfaction results cannot be used to evaluate any employee or to impose or suggest goals for any employee.
13.5.1.3.1.2 (09-05-2019)
Case Advocacy Customer Satisfaction - Roles and Responsibilities - BA Director

The BA Director is responsible for the national customer satisfaction survey and related activities, including:
Collaborating with the Executive Director of Case Advocacy (EDCA) to develop the specifications of the annual customer satisfaction survey plan;
Consulting with EDCA in developing a survey instrument that will provide actionable information to drive customer service improvements;
Collaborating with EDCA to identify organizational training needs, suggest strategic actions, and participate in studies to improve customer service;
Procuring, administering, and overseeing the survey process and the delivery of periodic reports that provide a basis for TAS's customer service improvement efforts;
Recording survey result data in Business Objects Enterprise (BOE); and
Analyzing nationwide survey results and other data, providing recommendations for improvements, and collaborating with stakeholders to implement improvements.

13.5.1.3.1.3 (09-05-2019)
Case Advocacy Customer Satisfaction - Roles and Responsibilities - EDCA

EDCA is a principal management authority for aligning TAS's organizational actions with customers' expectations. EDCA is responsible for the following activities:
Setting performance goals at the appropriate level, taking into account the balance of available resources and operational conditions;
Coordinating with the TAS Director of Employee Support and Development and the Deputy Executive Directors of Case Advocacy (DEDCA) to meet training needs identified from the customer satisfaction data;
Ensuring customer survey results are available throughout TAS's area and LTA offices;
Evaluating actions taken at all organizational levels in response to customer satisfaction reports and data analysis; and
Collaborating with TAS BA to develop nationwide strategies to improve customer satisfaction.
13.5.1.3.1.4 (09-05-2019)
Case Advocacy Customer Satisfaction - Roles and Responsibilities - EDCA ITS

The EDCA ITS is a principal management authority for aligning TAS's organizational actions with customers' expectations. The EDCA ITS is responsible for the following activities:
Coordinating with the TAS Director of Employee Support and Development and DEDCA to meet training needs identified from the customer satisfaction data; and
Maintaining an efficient workload intake and delivery system that promotes achievement of the balanced measures and TAS objectives.

13.5.1.3.1.5 (09-05-2019)
Case Advocacy Customer Satisfaction - Roles and Responsibilities - DEDCA

The DEDCAs are responsible for the customer satisfaction program for the LTA offices within their area. DEDCAs and LTAs are collectively responsible for analyzing the data using process management techniques and for engaging employees, through their representative organizations, in identifying local initiatives to improve customer satisfaction. DEDCAs will use the detailed data analysis provided with the BA-prepared reports and other analyses to provide guidance to LTAs that supplements the annual reports and drives organizational improvement activities. DEDCAs will periodically evaluate the impact of improvement action plans and implement corrections, as appropriate.

13.5.1.3.1.6 (09-05-2019)
Case Advocacy Customer Satisfaction - Roles and Responsibilities - LTA

LTAs are responsible for the customer satisfaction program for their office. The LTA is responsible for:
Using the customer satisfaction survey data, along with the other balanced measures and the annual Program Letter, to engage employees and recommend actionable suggestions to improve TAS's ability to identify and respond to taxpayers' concerns.
Actionable suggestions could be, but are not limited to, procedural changes within the office or at the national level initiated through a Systemic Advocacy Management System (SAMS) request, recommendations for training, equipment, etc.
Monitoring processes, customer satisfaction survey data, and any other information available to determine if improvements have had the desired impact, and making adjustments as needed to achieve desired results.

13.5.1.3.1.7 (09-05-2019)
Case Advocacy Customer Satisfaction - Roles and Responsibilities - TAGM

TAGMs are responsible for promoting customer satisfaction program awareness at the group level. The TAGM is responsible for:
Understanding customers' needs and expectations in order to support the LTA in developing improvement initiatives.
Effectively communicating customer needs and expectations to their employees to implement improvement initiatives.
Monitoring customer satisfaction and acting on results. All feedback should be used to facilitate continuous improvement in day-to-day operations.

13.5.1.3.2 (09-05-2019)
Systemic Advocacy Customer Satisfaction Measure

The goal of the Systemic Advocacy customer satisfaction survey is to measure the level of satisfaction of its internal customers (IRS employees) who submitted issues to the Systemic Advocacy Management System (SAMS). Submitters from outside the IRS are not surveyed.

13.5.1.3.2.1 (09-05-2019)
Systemic Advocacy Customer Satisfaction - Use and Limitations of Survey Information

Systemic Advocacy uses the information from the survey to gauge customer satisfaction and identify possible enhancements that may improve satisfaction. Systemic Advocacy does not collect any data that identifies the person who responded to the survey.

13.5.1.4 (09-05-2019)
TAS Business Results Measures

Business results measures include numerical scores determined under the quantity or output and quality or efficiency measures at an operational level.
Quantity or output measures consist of outcome-neutral production and resource data, such as the number of cases closed and inventory information. Quality measures are derived from TAS's Quality Review Program (QRP) and are discussed in greater detail in the subsequent sections. Efficiency measures consist of data used to assess the quality and volume of work completed. The goal of business results measures is to assess TAS's performance in achieving its overall mission and strategic goals. This objective involves more than reaching a target. Business results measures are one of three components of the balanced measurement system. Before taking actions to improve business results, the customer satisfaction and employee satisfaction components must be considered and addressed in order to carry out TAS's programs and functions successfully.

13.5.1.4.1 (09-05-2019)
TAS Quality Measures

Quality measures are numeric indicators of the extent to which completed work meets prescribed standards - the TAS quality attributes. TAS measures the quality of work completed by Case Advocacy and Systemic Advocacy through specifically dedicated staff in QRP. Case Advocacy and Systemic Advocacy quality measures, the management processes used to measure them, the use of measurement information, and roles and responsibilities are discussed in greater detail in the subsequent IRM sections.

13.5.1.4.1.1 (09-05-2019)
Case Advocacy Quality Measures

TAS Case Advocacy quality measures are numerical scores indicating the extent to which TAS casework meets the prescribed quality attributes. The attributes measure whether the casework actions correctly followed Internal Revenue Manual procedures and other official case processing guidance, such as Interim Guidance Memorandums. These results are indicators of quality and are used to improve TAS's advocacy efforts. QRP provides quality results at the national, area, and LTA office levels.
The results are based on QRP's review of randomly selected closed cases from every LTA office each month. The quality results are a product review and are based on the case in its entirety, regardless of whether a case was partially worked and transferred from one LTA office to another.

13.5.1.4.1.2 (09-05-2019)
Case Advocacy Quality Attributes

TAS's Case Quality Attributes make up the overall quality score. The quality attributes measure TAS's effectiveness in key aspects such as advocacy, communication with taxpayers, and adherence to procedural requirements.

13.5.1.4.1.3 (09-05-2019)
Case Advocacy Quality - Use and Limitations of Review Information

National Office will have access to quality results information at the national, area, and LTA office levels. Areas will have access to nationwide information, their own area, and the LTA offices under their chain of command. LTA offices will have access to nationwide information, their area, and their own office. TAS quality results are based on stratified random samples at the LTA office level and are only statistically valid at the LTA office, area, and national levels. They are not statistically valid at an individual case advocate level, at an office group level, or for one or more Primary Core Issue Codes. Statistically valid overall quality scores have an associated precision margin. When quoting the quality estimate and the precision margin is present, the precision margin for accuracy should also be quoted.

Example: An 85 percent quality estimate with a +/- 5 percent precision margin means that if TAS reviewed 100 percent of the closures, there is 90 percent confidence the "real" quality score would fall somewhere between 80 percent and 90 percent.

Quality scores may be shared with all TAS employees in balance with other measures, with a clear purpose for sharing, and not done in such a way as to imply individual or group targets.
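The precision margin in the example above behaves like the half-width of a confidence interval for a sampled proportion. The sketch below is a minimal simple-random-sample approximation, not TAS Research's actual stratified methodology; the 85 percent estimate and 90 percent confidence level come from the example, while the sample size of 138 is a hypothetical value chosen only because it happens to reproduce a margin of about +/- 5 points.

```python
import math

def precision_margin(p_hat, n, z=1.645):
    """Half-width of a normal-approximation confidence interval for a proportion.

    p_hat: observed quality score (e.g., 0.85)
    n:     number of cases reviewed in the sample
    z:     critical value (1.645 corresponds to 90 percent confidence,
           matching the example in the text)
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

p = 0.85
n = 138  # hypothetical monthly sample size, not an actual TAS figure
m = precision_margin(p, n)
print(f"{p:.0%} +/- {m:.1%}")  # prints 85% +/- 5.0%
```

This also shows why results are not statistically valid below the LTA office level: slicing the sample down to one case advocate shrinks n, which widens the margin until the score carries little information.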
As part of the monthly review results, QRP provides the LTA and designated analyst with individual case quality review results. These results may not be shared with TAGMs, lead case advocates, case advocates, intake advocates, or other similar positions in LTA offices. However, managers and designated analysts may discuss with employees the merits and issues of a particular case that was reviewed, but the emphasis must be on the techniques for advocating more effectively for taxpayers and not on the quality score. The identity of the employee who worked the case should not be revealed, and employees should not be asked to defend why they worked cases in a particular way. All levels of the organization should use the quality result information to conduct analysis, explore best practices, and develop plans to increase TAS's effectiveness in advocating for taxpayers and case processing.

Note: Case quality results are only one measurement of program performance and must be balanced with other measures and indicators to evaluate the overall success of TAS advocacy and develop plans for improvement. Quality results of any one LTA office should not be used as a standard by which any other LTA office is evaluated, as there are inherent differences among LTA offices. Quality results may not be used to evaluate any employee or to impose or suggest goals for any employee.

13.5.1.4.1.4 (09-05-2019)
Case Advocacy Quality Monthly Review Sample

Each month, QRP reviews a sample of randomly selected closed cases from every LTA office to measure the extent to which casework meets the prescribed quality attributes. Cases are randomly selected through TAMIS for the monthly sample and are accessible in the Quality Sample Selection (QSS) report. The QSS report lists the randomly selected cases, which LTA offices must provide to QRP for review.
The QSS report only selects TAS cases eligible for the quality sample, which includes closed criteria 1 through 9 cases (regular and reopen), including Congressional, Senate Finance Committee, and Tax Forum cases (when assigned to a TAS office after the close of the forum), and excludes Special Case Code F1, Tax Forum Event (Non-CQR) cases. To ensure an eligible case is not inadvertently excluded from the random sampling, the QSS report was designed to use the TAMIS Closed Date and not the TAS Closed Date. Therefore, a TAS case with a TAS Closed Date in one month and a TAMIS Closed Date in a subsequent month will be in the subsequent month's quality sample pool for possible random selection. QRP must approve the replacement of any cases from the QSS report, and the approval must be documented on the QSS report. QRP will approve the use of alternate cases only under the following circumstances:
The case is a Tax Forum case and not correctly coded with Special Case Code F1.
A clerical error caused the erroneous closure, and the case's documentation supports the clerical error.
Extenuating circumstances exist where cases would not properly reflect a sample of TAS's normal work, and the DNTA approves exclusion for cases meeting set criteria.
To ensure the selection of the correct original or reopen case, the QSS report also includes the reopen sequence number. If a reopened case is selected for review, LTA offices may keep the original case file and provide QRP with photocopies of the entire case file's contents. Employees should not know when their cases have been selected for quality review. Therefore, the charge-out information section in TAMIS should not be used when cases are selected for quality review. Instead, LTA offices may elect to charge out their quality sample to their LTA or Quality Area Analyst. If the total number of cases closed for the month is less than the required sample size, then the LTA office must send all closed cases for quality review.
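The eligibility and fallback rules above amount to a filter-then-sample step. This sketch is illustrative only: the criteria 1-9 range, the Special Case Code F1 exclusion, and the "send everything if closures fall short of the sample size" rule come from the text, while the field names and data shape are hypothetical stand-ins, not actual TAMIS or QSS report fields.

```python
import random

ELIGIBLE_CRITERIA = set(range(1, 10))  # closed criteria 1 through 9 (regular and reopen)
EXCLUDED_SPECIAL_CODE = "F1"           # Tax Forum Event (Non-CQR) cases are excluded

def monthly_sample(closed_cases, sample_size, seed=None):
    """Select one office's monthly quality review sample.

    closed_cases: list of dicts with illustrative 'criteria' and
    'special_code' keys (not real TAMIS field names). Cases are eligible
    if their criteria code is 1-9 and they are not coded F1. If fewer
    eligible closures exist than the required sample size, all eligible
    closures are sent for review, mirroring the rule in the text.
    """
    eligible = [c for c in closed_cases
                if c["criteria"] in ELIGIBLE_CRITERIA
                and c.get("special_code") != EXCLUDED_SPECIAL_CODE]
    if len(eligible) <= sample_size:
        return eligible
    rng = random.Random(seed)          # seeded only for reproducible illustration
    return rng.sample(eligible, sample_size)
```

Sampling on the TAMIS Closed Date rather than the TAS Closed Date simply shifts which month's pool a late-recorded closure lands in; it never drops the case from selection entirely.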
If the LTA is unable to access their QSS report, the LTA office should contact their designated Area Analyst. See QRP’s SharePoint site for detailed information about the monthly sample size methodology and shipping cases to QRP. 13.5.1.4.1.5 (09-05-2019) Case Advocacy Quality Dialogue Process TAS established a dialogue process to enable LTA offices to request reconsideration of an error identified during the case quality review. If the error is reversed, QRP revises the quality scores for the LTA office, area office and national score. The dialogue process is also a useful tool to identify: Improvements needed in procedural guidance; Areas for improvement in case processing and advocacy; Training needs; and Systemic issues. Additional information about the dialogue process is posted to TAS Quality Review Program’s SharePoint site. 13.5.1.4.1.6 (09-22-2020) Case Advocacy Quality - Roles and Responsibilities - QRP The QRP Director has overall responsibility for the QRP and oversees the following activities: Ensuring QRP releases the monthly quality results; Maintaining and revising, as appropriate, the TAS quality review database with support from TAS Business Modernization (BM); Creating and maintaining monthly and cumulative quality reports at the National, the Area and the LTA levels with support from BM; Publishing the quality results data and notifying the LTA and area designees of the report’s availability; Evaluating the monthly case quality review sample size and making adjustments when appropriate; Responding to dialogue requests and revising results to reflect reversed quality errors; and Analyzing nationwide results, providing recommendations for improvements and collaborating with stakeholders to implement improvements.
13.5.1.4.1.7 (09-05-2019) Case Quality - Roles and Responsibilities - EDCA EDCA is TAS’s principal management authority for aligning TAS’s organizational actions and goals with customer expectations by: Proposing national, area and office-level performance goals, taking into account the balance of available resources and operational conditions; Determining resources required for TAS’s Area and LTA offices to effectively manage the quality process; Coordinating with the QRP Director, DEDCAs, and Employee Support and Development Director to meet employee training needs identified through the quality review data; Ensuring the quality results are available to Area and LTA offices to improve quality; and Evaluating the actions taken at all organizational levels in response to quality reports and data analyses. 13.5.1.4.1.8 (09-05-2019) Case Quality - Roles and Responsibilities - EDCA ITS The EDCA ITS is TAS’s principal management authority for aligning TAS’s organizational actions and goals with customer expectations by: Proposing national and organizational performance goals, taking into account the balance of available resources and operational conditions; Determining resources required to effectively manage the quality process; Coordinating with the QRP Director, DEDCAs, and Employee Support and Development Director to meet employee training needs identified through the quality review data; Ensuring the quality results are available to ITS leadership to improve quality; and Evaluating the actions taken at all organizational levels in response to quality reports and data analyses. 13.5.1.4.1.9 (09-05-2019) Case Advocacy - Quality Roles and Responsibilities - DEDCA TAS DEDCAs are responsible for the case quality review program for their Area. 
The DEDCA is responsible for: Ensuring the monthly quality samples are sent from the Area’s LTA offices to QRP; Reviewing the Area’s and associated offices’ monthly and cumulative quality reports; Analyzing quality review data for the Area and LTA offices within that Area to identify trends, procedures needing improvement, training needs, systemic problems, and best practices; and Using the analytical results to improve quality in the Area’s LTA offices (e.g., share best practices, identify and provide needed training, and work with LTA offices on areas needing improvement). 13.5.1.4.1.10 (09-22-2020) Case Quality Advocacy - Roles and Responsibilities - LTA LTAs are responsible for the quality review program in their office. The LTA is responsible for: Ensuring monthly quality samples are sent from the LTA office to QRP; Reviewing the office’s monthly and cumulative quality reports; Initiating dialogue requests to QRP when appropriate, keeping the designated area analyst informed of ongoing dialogues and elevating any disagreements according to the TAS case dialogue process; Disseminating clarifications in TAS procedures to TAS managers and case advocates in the office; Analyzing quality review data for the office to identify trends, procedures needing improvement, training needs, systemic problems, and best practices; Using the analytical results to improve quality in the LTA office (e.g., share best practices, set up training classes, and work with managers and case advocates on specific improvement opportunities); and Ensuring cases that should be reopened are correctly resolved. 13.5.1.4.1.11 (09-05-2019) Case Quality Advocacy - Roles and Responsibilities - TAGM TAGMs are responsible for case quality program awareness at the group level.
The TAGM is responsible for: Assisting the LTA in developing improvement initiatives, including sharing at the group level to foster employee buy-in (participation); Facilitating the development of improvement initiatives by engaging employees during group meetings and during coaching opportunities; and Targeting identified errors during case reviews as outlined in the TAS Program Letter to gauge improvement. 13.5.1.4.2 (09-05-2019) TAS Systemic Advocacy (SA) Quality Measures TAS SA quality, a business results measure, is a numerical score of the extent to which TAS SA advocacy projects (AP) and immediate interventions (II) meet the prescribed quality attributes. The attributes measure whether the project work actions correctly followed IRM guidance, such as Interim Guidance Memorandums. TAS derives its SA quality from the QRP’s monthly quality reviews of the closed SA APs and IIs. Each month, QRP shares with SA the individual and cumulative quality results of APs and IIs. 13.5.1.4.2.1 (09-22-2020) Systemic Advocacy (SA) Quality Attributes QRP performs quality review on all closed APs and IIs to determine if SA worked them according to the standards and procedures. The attributes are categorized in three focus areas:
- Advocacy - Taking the appropriate actions to resolve taxpayer problems.
- Customer - Providing clear and complete responses to submitters through accurate, effective, and comprehensive written and verbal contacts, to ensure submitters perceive TAS employees as professional, positive, knowledgeable, and competent.
- Procedural - Resolving submitters’ inquiries efficiently, within the prescribed guidelines and timeframes, and through proper workload management.
Note: A list of the individual SA Quality Attributes can be found on QRP’s Quality Report SharePoint site.
13.5.1.4.2.2 (09-05-2019) Sharing and Using TAS Systemic Advocacy (SA) Quality Review Results Once QRP has completed its review of APs and IIs, QRP shares the results with the Executive Director of Systemic Advocacy and his/her designated staff members. TAS uses project quality review data to provide a basis for measuring and improving program effectiveness by: Analyzing the results to identify defect trends and root causes; Developing plans to increase effectiveness in advocating for taxpayers and project processing; and Exploring best practices. Managers and designated analysts may discuss with employees the merits and issues of a particular project that was reviewed, but the emphasis must be on techniques for advocating more effectively for taxpayers and not on the quality score. The identity of the employee who worked the project should not be revealed and employees should not be asked to defend why they worked projects in a particular way. Managers should also be sensitive to whether the project was worked by a bargaining unit or non-bargaining unit employee. 13.5.1.4.2.3 (09-05-2019) TAS Systemic Advocacy (SA) Dialogue Process TAS established a dialogue process to enable SA to request a reconsideration of an error identified in QRP’s review. The dialogue process may result in QRP reversing an error charged. If the error is overturned, QRP will revise the quality scores. The dialogue process is also a useful tool to identify improvement opportunities in procedural guidance, advocacy, and training. 13.5.1.4.2.3.1 (09-05-2019) TAS Systemic Advocacy Dialogue Process and Timeframes Systemic Advocacy initiates the dialogue process by contacting QRP to submit information regarding dialogued attributes. Instructions for submitting the dialogue and timeframes for the dialogue process are posted on QRP’s SharePoint site. 13.5.1.4.2.4 (09-05-2019) Systemic Advocacy Quality - Roles and Responsibilities - QRP Director The QRP Director has overall responsibility for QRP.
The QRP Director oversees the following activities: Ensuring QRP reviews and documents the results of the monthly projects reviewed, posts results in SharePoint, and shares results with Systemic Advocacy; Maintaining and revising, as appropriate, the TAS quality review data collection instrument with support from Statistics of Income (SOI); Creating and maintaining monthly, quarterly and cumulative quality reports; Responding to dialogues and revising results to reflect reversed quality errors; and Providing quarterly analysis of quality result trends. 13.5.1.4.2.5 (09-05-2019) Systemic Advocacy Quality - Roles and Responsibilities - EDSA The Executive Director of Systemic Advocacy (EDSA) has overall responsibility for Systemic Advocacy, which includes the processing of APs and IIs. The EDSA oversees the following quality-related activities in SA: Proposing SA performance goals, taking into account the balance of available resources and operational conditions; Determining resources required in SA to effectively manage projects; Coordinating with the QRP Director to meet employee training needs identified through the quality review data; and Evaluating the actions taken in response to quality reports and data analyses. 13.5.1.5 (09-05-2019) Using Diagnostic Tools in TAS TAS uses diagnostic tools to analyze factors that influence performance and encourages dialogue about specific actions that managers may take to improve customer satisfaction, employee satisfaction, and business results.
The following are examples of TAS diagnostic tools:
- Median closed case cycle time
- Mean closed case cycle time
- Relief granted
- Number of TAOs issued
- Closures with secondary issues
- IMD recommendations made to the IRS
- IMD recommendations accepted by the IRS
- Cycle time analyzed by unique segmentation
- Customer satisfaction survey results, such as responses to particular survey questions, improvement priorities identified, and narrative comments
- Employee survey results, such as responses to particular survey questions
- Employee experience/training/skill levels
- External factors (e.g., tax law, status of the economy)
- Employee absenteeism and turnover rates
- Physical resources
- Receipts
- Inventory level
- Closure to receipt ratio
- Workload mix
- Staffing resources
- Cost information
- Regular criteria receipts (excludes reopen criteria receipts)
- Regular criteria ending inventory
- Regular criteria closures as a percentage of regular criteria receipts (excludes reopen criteria receipts)
- Reopen criteria receipts as a percentage of regular criteria closures
- Permanent staffing on rolls
TAS does not use diagnostic tools to measure individual performance. TAS may establish improvement targets for diagnostic tools, but only in direct support of overarching budget or operational level measures. Using diagnostic tools to compare one unit against other units may be appropriate for conducting analysis, exploring best practices, or seeking process enhancements to support improvement of the overarching balanced measure(s). Diagnostic tools include any type of data that is helpful in understanding what influences and impacts balanced measures. It is permissible to use ROTERs as diagnostic tools. Exhibit 13.5.1-1 Case Advocacy National Quality Sample Size TAS determines the national sample plan based on consultations with Statistics of Income (SOI) personnel and EDCA, and secures approval from the Deputy National Taxpayer Advocate for any changes in sampling methodology.
If an office’s sample size changes based on the existing approved sampling plan, QRP will notify those offices impacted. The national random sample is divided or stratified among individual offices at the LTA level. Stratifying the random sample by individual office improves the statistical accuracy of the quality estimate for each office because the variation in quality within an individual office is likely to be lower than the variation in quality between individual offices. The monthly sample size in each office is based on the number of randomly sampled cases necessary to provide a statistically valid estimate of case quality at the LTA level by the end of the fiscal year. TAS uses sample sizes that achieve a minimum confidence level of 90 percent in the quality estimate with a maximum margin of error, or precision margin, of 5 percent above or below the quality estimate. TAS may establish annual sampling plans that review more cases than necessary to achieve 90 percent confidence and 5 percent precision in order to achieve other organizational goals, such as trend analysis or targeted program analysis. However, TAS will not sample less than the required minimum number of cases to achieve 90 percent confidence with 5 percent precision by the end of the fiscal year in each office. Monthly sample size in each office is determined at the beginning of each fiscal year. In the interest of administrative convenience, monthly sample size in each office generally will not vary during a fiscal year. However, monthly sample sizes may vary between offices based on technical advice from SOI, but the monthly sample size for each office will generally remain the same throughout the fiscal year. SOI uses a method of calculating confidence levels and precision margins called the Standard Score, or z-Score Distribution Method. 
Using this method, SOI can make statistically valid estimates of quality, confidence level, and precision margin at any organizational level once a random sample of 40 or more cases has been reviewed by QRP at that organizational level. The organizational level can be national, area, or office level. The z-Score Distribution Method requires a minimum of 40 random case reviews because variation in the distribution of sampling estimates starts to resemble a standard normal, bell-shaped curve when 40 or more randomly sampled cases are available. Once the distribution of estimates starts to resemble a normal, bell-shaped curve, statisticians can assign confidence levels and precision margins to the quality estimate at the organizational level based on the known properties of a normal, bell-shaped curve. Monthly sample size determines when 40 or more case samples are available at each organizational level. At the National level, 40 or more cases are available during the first month of each fiscal year, so SOI can estimate quality with 90 percent confidence and 5 percent precision starting in October each year. Similarly, SOI can compute statistically valid estimates for area offices in the first month of each year if the total number of samples taken from offices within the area consists of 40 or more cases during the first month of the fiscal year. In contrast, LTA offices do not achieve cumulative sample sizes of 40 cases or more in the first month of the fiscal year. Therefore, SOI cannot compute statistically valid estimates with precision margins for LTA offices until later in the fiscal year, when the cumulative sample size has reached 40 or more cases per LTA office. SOI “weights” TAS quality results by the total number of cases closed in an office during a month. Weighting is necessary because TAS samples a fixed number of cases in each office per month, but the total number of cases closed in each office varies every month.
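The relationship between confidence level, precision margin, and minimum sample size can be illustrated with the standard z-score formula for estimating a proportion. This is a generic statistical sketch, not SOI's actual sampling plan (which also stratifies by office and weights by closures):

```python
import math

# Generic z-score sample-size formula for a proportion estimate:
# n = z^2 * p * (1 - p) / E^2, where E is the margin of error
# (precision) and p = 0.5 is the worst-case (most variable) proportion.
def required_sample_size(z: float, margin: float, p: float = 0.5) -> int:
    """Minimum annual sample to estimate a quality rate within +/- margin."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# 90 percent confidence corresponds to z ~= 1.645; 5 percent precision.
n = required_sample_size(z=1.645, margin=0.05)  # -> 271
```

Dividing such an annual figure across twelve months gives a fixed monthly sample per office, which is why an individual LTA office does not accumulate the 40 reviews needed for the normal approximation until partway through the fiscal year.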
Therefore, each case in the sample actually represents a certain number of cases that were closed during the month but were not included in the random sample. Weighting adjusts the quality estimate to account for the cases that were not included in the sample during the month.
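The weighting step described above can be sketched as a closure-weighted average. The office figures below are made up for illustration; the point is that a fixed per-office sample size means each sampled case stands in for a different number of closures in each office:

```python
# Illustrative sketch of closure-weighted quality aggregation:
# each office's quality rate counts in proportion to how many
# cases that office actually closed during the month.
def weighted_quality(offices: list[dict]) -> float:
    """Closure-weighted average of per-office quality rates."""
    total_closed = sum(o["closed"] for o in offices)
    return sum(o["quality"] * o["closed"] for o in offices) / total_closed

offices = [
    {"closed": 400, "quality": 0.90},  # large office, same sample size
    {"closed": 100, "quality": 0.80},  # small office, same sample size
]
score = weighted_quality(offices)  # ~0.88, vs. an unweighted mean of 0.85
```

Without weighting, the small office's score would pull the combined estimate down as much as the large office's pulls it up, even though the large office closed four times as many cases.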