Learn best practices for obtaining reliable audit evidence, including log sampling, configuration reviews, and re-performance of controls, to strengthen your SOC engagement.
Effective evidence gathering is at the heart of any successful System and Organization Controls (SOC) engagement. Whether you are conducting a SOC 1®, SOC 2®, SOC 3®, or SOC for Cybersecurity examination, the ultimate objective is to form an opinion on the design and operating effectiveness of the service organization’s controls. This requires obtaining sufficient, appropriate evidence to reduce the risk of issuing an incorrect opinion to an acceptably low level. In this section, we explore the types of evidence, techniques, and tools that can be used to gather it, and best practices for evaluating evidence reliability. We also provide real-world examples and practical advice that can be applied to your engagements immediately.
Beyond the specific guidance in this chapter, you may find it helpful to recall overarching standards and frameworks discussed in earlier chapters, such as COSO Internal Control – Integrated Framework (Chapter 3.1) and COBIT 2019 (Chapter 3.3), which provide valuable context for understanding control governance. Keep in mind that the rigorous nature of SOC engagements demands consistent, systematic, and well-documented evidence collection—an important part of the planning and performance stage (see Chapter 25).
In a SOC engagement, evidence refers to any information used by the service auditor to support or contradict management’s assertions. Commonly, you will evaluate evidence related to the design of controls, their operating effectiveness, or both, depending on whether the engagement is a Type 1 (suitability of design as of a point in time) or a Type 2 (design and operating effectiveness over a period). Each piece of evidence you collect must meet two overarching criteria:
• Sufficient: The quantity of evidence must be enough to form a reasonable basis for your opinion.
• Appropriate: The evidence must be relevant and reliable in supporting or refuting the control objectives or Trust Services Criteria (for SOC 2®) under examination.
The evidence typically takes one or more of the following forms:
• Inspection of documents, records, logs, and configurations
• Observation of processes and control activities as they are performed
• Inquiry of knowledgeable personnel
• Re-performance of control procedures by the auditor
• Analytical procedures applied to transactional or operational data
Combining these methods is crucial for ensuring evidence sufficiency. For example, an auditor reviewing user access controls might inspect system logs, interview the system owner, observe the password reset process, and re-perform an access provisioning exercise. Each strand of evidence bolsters the overall conclusion.
Selecting the right tools can streamline data collection and help detect anomalies and deficiencies more efficiently. Below is an overview of widely used tools that can assist in gathering valid and reliable evidence during a SOC engagement:
Log Analysis and SIEM Tools
• Purpose: Examine application, database, or network logs for unusual activity, control exceptions, or patterns of normal operation.
• Examples: Splunk, Graylog, LogRhythm, and open-source solutions such as the ELK Stack (Elasticsearch, Logstash, Kibana).
• Benefits: Quick identification of suspicious activities, correlation of log events across multiple systems, and centralized management for large volumes of data.
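To make the log-review idea concrete, here is a minimal Python sketch that scans an authentication log for repeated failed logins. The log format, the `FAILED_LOGIN` marker, and the threshold are illustrative assumptions, not a prescription for any particular SIEM:

```python
import re
from collections import Counter

# Hypothetical log line: "2024-05-01T08:13:02 FAILED_LOGIN user=jsmith src=10.0.4.17"
FAILED = re.compile(r"FAILED_LOGIN user=(\S+)")

def flag_repeated_failures(log_path, threshold=10):
    """Count failed logins per user and return accounts at or above the threshold."""
    counts = Counter()
    with open(log_path) as fh:
        for line in fh:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return {user: n for user, n in counts.items() if n >= threshold}

# Usage: flag_repeated_failures("auth.log") might return {"jsmith": 14}
```

In practice, a SIEM platform performs this correlation at scale, but the underlying test is the same: define what "unusual" means, then scan the full population of log events for it.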
Configuration Management and Comparison Tools
• Purpose: Capture and compare system, application, or device configurations against baseline settings or prior states.
• Examples: Chef, Puppet, Ansible, Microsoft Sysinternals, and built-in operating system utilities.
• Benefits: Facilitates tracking of unauthorized or accidental configuration changes, captures system states for subsequent re-performance, and helps ensure alignment with baseline security benchmarks or compliance standards.
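As a simple illustration of configuration comparison, the following sketch uses Python's standard `difflib` to diff a captured configuration export against an approved baseline. The file names are hypothetical; in practice the export would come from the configuration tool or utility in use:

```python
import difflib

def config_drift(baseline_path, current_path):
    """Return unified-diff lines showing drift from the approved baseline."""
    with open(baseline_path) as f:
        baseline = f.readlines()
    with open(current_path) as f:
        current = f.readlines()
    return list(difflib.unified_diff(baseline, current,
                                     fromfile="baseline", tofile="current"))

# Hypothetical file names; any drift found should be traced to change tickets
for line in config_drift("sshd_config.baseline", "sshd_config.current"):
    print(line, end="")
```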
Governance, Risk, and Compliance (GRC) Platforms
• Purpose: Provide centralized repositories for policies, risks, controls, and compliance requirements. Automate workflows for issue tracking and remediation.
• Examples: ServiceNow GRC, MetricStream, RSA Archer, and other integrated risk management software.
• Benefits: Consolidated environment for documenting test procedures, tracking remediation, capturing evidence, assigning responsibilities, and producing final SOC reporting deliverables.
Data Analytics and Computer-Assisted Audit Techniques (CAATs)
• Purpose: Conduct advanced analytics, identify patterns or anomalies in financial and operational data, and facilitate sampling approaches.
• Examples: IDEA, ACL (Galvanize), Alteryx, or standard audit sampling modules in broad analytics platforms.
• Benefits: Enables large-scale testing of entire populations or risk-based sampling, potential reduction in human error, and faster detection of outliers or exceptions.
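Full-population testing can be as simple as scanning every record for a rule violation. The sketch below flags journal entries above a threshold that lack an approver; the CSV column names (`je_id`, `amount`, `approved_by`) and the threshold are assumptions for illustration:

```python
import csv

def unapproved_large_entries(csv_path, threshold=50_000):
    """Scan the full journal-entry population for postings above the
    threshold that lack an approver -- no sampling required."""
    exceptions = []
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            amount = float(row["amount"])
            if amount > threshold and not row.get("approved_by"):
                exceptions.append(row["je_id"])
    return exceptions
```

Because the entire population is tested, any exception found is a direct deviation rather than a projection from a sample.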
Collaboration and Project Management Tools
• Purpose: Manage deadlines, tasks, and communication across geographically dispersed engagement teams and client personnel.
• Examples: Microsoft Teams, Trello, Asana, or custom portals for document exchange (e.g., Box, SharePoint).
• Benefits: Real-time collaboration, streamlined evidence requests, improved version control, more efficient data collection, and transparent progress tracking.
Several testing techniques are available for SOC engagements, and each technique fits different objectives or control types. Effective engagements often rely on a combination of methods to maximize the reliability of conclusions.
Inspection is one of the most common and straightforward auditing procedures. You validate the existence and appropriateness of a control by reviewing formal documentation such as policies, procedures, system logs, configuration files, and records of transactions processed during the period under review.
• Example: For user access reviews, you might inspect lists of disabled or terminated user accounts, compare them to the most recent HR termination list, and verify timely revocation of credentials.
• Considerations: Ensure the documentation is complete and up-to-date. Cross-check system timestamps, version histories, or signature blocks to confirm authenticity.
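To illustrate the user access inspection above, this minimal sketch compares employee IDs from an HR termination list against an export of active accounts; any overlap is a revocation exception. The identifiers are hypothetical:

```python
def revocation_exceptions(terminated_ids, active_account_ids):
    """Terminated employees whose accounts are still active."""
    return sorted(set(terminated_ids) & set(active_account_ids))

# Hypothetical data for illustration
hr_terminations = {"e104", "e221", "e307"}   # from the HR termination list
active_accounts = {"e099", "e221", "e450"}   # from the identity-system export
print(revocation_exceptions(hr_terminations, active_accounts))  # ['e221']
```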
Observation involves watching a control or process as it is performed. In IT environments, this may include seeing how employees handle data backups, how operators monitor performance dashboards, or how help-desk staff verify user identities before password resets.
• Example: Observe how IT operators perform daily server health checks and note any anomalies or exceptions.
• Considerations: Observation captures real-world control execution, but only at a point in time. It may therefore need to be supplemented by subsequent inquiries or inspection of logs to confirm consistent operation over the entire audit period.
Inquiry uses interviews, questionnaires, or direct communication with knowledgeable personnel to better understand processes, the control environment, or undocumented aspects of an organization’s control activities. While inquiry can yield valuable context, it should rarely stand alone as the sole evidence source.
• Example: Interview the network administrator to clarify the nature of certain firewall rules or to confirm patch management workflows.
• Considerations: Corroborate results from inquiries with at least one other form of evidence—inspection of network device configurations, logs of patch deployments, or direct observation—to rule out misunderstandings or unintentional misrepresentations.
Re-performance is a powerful technique where the auditor independently executes the same procedures performed by the organization’s personnel to confirm accuracy and completeness. This is widely used for evaluating transaction processes within an application, verifying configurations, or testing calculations in financial statements.
• Example: Independently recompute a sample of system-calculated amounts (such as service fees or interest) using the same inputs, and compare your results to the application’s output; a minimal sketch follows these bullets.
• Considerations: Use the same dataset or scenario as the organization did so that results can be compared one-to-one. Obtain read-only credentials or a controlled test environment to avoid inadvertently changing production data.
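As referenced in the example above, here is a minimal re-performance sketch. It assumes a hypothetical simple-interest fee formula and sample transaction data; the point is the independent recomputation and one-to-one comparison, not the specific calculation:

```python
def expected_fee(principal, annual_rate, days, basis=365):
    """Independently recompute the fee the application should have charged."""
    return round(principal * annual_rate * days / basis, 2)

# Re-perform against system output for a sample of transactions (hypothetical data)
sample = [
    {"txn": "T-1001", "principal": 120_000, "rate": 0.045, "days": 30, "system_fee": 443.84},
    {"txn": "T-1002", "principal": 80_000,  "rate": 0.045, "days": 31, "system_fee": 305.75},
]
for t in sample:
    recalculated = expected_fee(t["principal"], t["rate"], t["days"])
    status = "match" if abs(recalculated - t["system_fee"]) < 0.01 else "EXCEPTION"
    print(t["txn"], recalculated, t["system_fee"], status)
```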
Analytical procedures include ratio analysis, trend identification, and predictive modeling. They are especially useful where large volumes of data are involved, such as high-frequency transactions. They may be supplemented with advanced data analytics techniques such as time-series analyses, outlier detection, or machine learning–based clustering.
• Example: Compare monthly revenue to the prior year, adjusted for known business changes, to assess reasonableness. Outliers could signal unauthorized adjustments or system errors.
• Considerations: While powerful, these methods require careful selection of data inputs and assumptions. The results are typically more instructive as indicators of possible issues rather than conclusive proof of control effectiveness.
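As a simple analytical-procedure sketch, the code below flags months whose revenue deviates from the prior year by more than a tolerance. The figures and the 10% tolerance are illustrative assumptions:

```python
def flag_variances(current, prior, tolerance=0.10):
    """Flag months where revenue deviates from prior year by more than the tolerance."""
    flags = []
    for month in current:
        change = (current[month] - prior[month]) / prior[month]
        if abs(change) > tolerance:
            flags.append((month, round(change, 3)))
    return flags

# Hypothetical monthly revenue (in thousands)
prior   = {"Jan": 900, "Feb": 880, "Mar": 910}
current = {"Jan": 925, "Feb": 1040, "Mar": 918}
print(flag_variances(current, prior))  # [('Feb', 0.182)]
```

A flagged month is not proof of a control failure; it identifies where to direct inspection or inquiry, consistent with the considerations above.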
SOC engagements often cover large volumes of transactions or massive log files. Reviewing all data may be impractical, so sampling becomes a critical part of evidence gathering. Effective sampling strategies can help reduce engagement costs and time while retaining reliability.
Before sampling can begin, clearly define the population. For example:
• All user access changes during the review period
• All firewall configuration changes
• All Journal Entry (JE) postings above a specific threshold
Sample size depends on factors such as desired confidence level, acceptable risk of errors, and the complexity of the population. A simplified (though not the only) formula for sample size determination is:

n = (Z² × p × (1 − p)) / e²

Where:
• n is the sample size.
• Z is the Z-score for the desired confidence level.
• p is your estimated or expected proportion of deviations in the population.
• e is the acceptable sampling error (precision).
For example, if you need 95% confidence (Z ≈ 1.96), expect 5% deviations (p = 0.05), and can tolerate 3% margin of error (e = 0.03), you could approximate the required sample size for a homogeneous population. In practice, SOC engagements often use more nuanced statistical or non-statistical methods—both can be acceptable if properly justified.
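Using the formula above, a short calculation confirms the example; rounding up to the next whole sampling unit with `math.ceil` is a common convention:

```python
import math

def sample_size(z, p, e):
    """n = Z^2 * p * (1 - p) / e^2, rounded up to a whole sampling unit."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# 95% confidence, 5% expected deviation rate, 3% precision
print(sample_size(1.96, 0.05, 0.03))  # 203
```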
You might employ random, systematic, or judgmental sampling based on risk. For instance, if certain high-risk transactions or log types are more likely to reveal control deficiencies, a targeted sample approach might be warranted. Once a method is chosen and items are selected:
• Document the population, sampling method, and selection rationale in the workpapers so the selection can be re-created.
• Test each sampled item against the same control attributes, applying procedures consistently.
• Investigate every deviation and evaluate whether it indicates a broader control deficiency in the population.
A reproducible selection sketch follows this list.
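This sketch assumes a population of 500 change tickets and a sample of 25; the fixed random seed is recorded so the selection can be re-created in the workpapers:

```python
import random

population = [f"CHG-{i:04d}" for i in range(1, 501)]  # e.g., 500 change tickets

# Random sampling: every item has an equal chance of selection
random.seed(42)  # document the seed so the selection is reproducible
random_sample = random.sample(population, 25)

# Systematic sampling: fixed interval from a random start
interval = len(population) // 25
start = random.randrange(interval)
systematic_sample = population[start::interval][:25]
```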
Re-performance of system configurations can be one of the most reliable ways to validate IT general controls (ITGCs), aligning with the Chapter 8 (IT General Controls) discussion. By manually or automatically checking the same configurations tested by the organization, you reduce reliance on the organization’s internal documentation alone.
Example: Testing a Firewall Configuration
Obtain read-only access to the firewall (or a current configuration export), independently retrieve the active rule set, and compare it to the approved rule baseline; trace any differences to authorized change tickets.
Example: Testing a Database Configuration
Query security-relevant settings (for example, minimum password length, account lockout thresholds, and audit logging parameters) directly from the database and compare the captured values to the organization’s hardening standard.
Considerations:
Work in read-only mode or in a controlled environment, record the exact commands and timestamps used, and retain the raw output so that your comparison can be re-created and reviewed.
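A minimal sketch of the database-configuration comparison described above, assuming a small baseline of required settings; real baselines would come from the organization’s hardening standard or a benchmark such as CIS:

```python
# Hypothetical baseline of required database settings and the values
# captured from the live system during re-performance.
BASELINE = {"password_min_length": 12, "failed_login_lockout": 5, "audit_logging": "ON"}

def config_exceptions(captured):
    """Compare captured settings to the approved baseline; list deviations."""
    return {k: (captured.get(k), expected)
            for k, expected in BASELINE.items()
            if captured.get(k) != expected}

captured = {"password_min_length": 8, "failed_login_lockout": 5, "audit_logging": "ON"}
print(config_exceptions(captured))  # {'password_min_length': (8, 12)}
```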
Just as a geodesist uses triangulation for precise mapping, a service auditor should integrate multiple evidence sources to form a robust conclusion. Two or more types of evidence pointing to the same conclusion greatly enhance confidence.
Suppose you want to verify that termination procedures are functioning effectively within the HR and IT systems. Four complementary evidence sources might include:
• Inspection of the HR system’s termination records for the period
• Inspection of identity-management or directory logs showing when accounts were disabled
• Inquiry of help-desk personnel about how termination requests are received and processed
• Re-performance: independently checking a sample of terminated employees’ accounts for active credentials
If all four evidence sources align, the conclusion is strongly supported. However, if the evidence conflicts (for example, the logs show delayed account revocation, or the help desk overlooked a termination request), further investigation is warranted.
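To illustrate the cross-check, this sketch compares HR termination dates against account-disablement dates and flags revocations later than an assumed one-day service level. The dates and the service level are hypothetical:

```python
from datetime import date

def revocation_delays(terminations, revocations, max_days=1):
    """Days between HR termination and account disablement; flag late or missing revocations."""
    late = {}
    for emp, term_date in terminations.items():
        revoked = revocations.get(emp)
        if revoked is None or (revoked - term_date).days > max_days:
            late[emp] = None if revoked is None else (revoked - term_date).days
    return late

terminations = {"e104": date(2024, 3, 1), "e221": date(2024, 3, 4)}
revocations  = {"e104": date(2024, 3, 1), "e221": date(2024, 3, 9)}
print(revocation_delays(terminations, revocations))  # {'e221': 5}
```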
To illustrate these concepts, consider a service organization using a popular ERP system. One of their control objectives is that all program changes are authorized, tested, and approved in a dedicated test environment prior to migration to production.
Key Steps in Gathering Evidence:
1. Inquiry and Documentation Review: Interview the change manager and review the change management policy to understand how changes are requested, authorized, tested, and approved.
2. Sampling Change Tickets: Select a sample of change tickets from the period and inspect each for documented authorization, test results, and approval prior to migration.
3. Observation: Observe a migration from the test environment to production to see segregation of duties and approval checkpoints in action.
4. Re-performance: Independently compare the production deployment log to the population of approved tickets to confirm no unauthorized changes reached production (see the sketch after this list).
5. Triangulation: Corroborate the results across inquiry, inspection, observation, and re-performance before concluding on the control objective.
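A minimal sketch of that re-performance step, reconciling a production deployment log against approved change tickets; the ticket IDs, statuses, and field names are illustrative assumptions:

```python
def unauthorized_deployments(deploy_log, approved_tickets):
    """Production migrations with no matching approved change ticket."""
    approved = {t["id"] for t in approved_tickets if t["status"] == "approved"}
    return [d for d in deploy_log if d["ticket"] not in approved]

# Hypothetical data for illustration
deploy_log = [{"ticket": "CHG-0041", "object": "AR_INVOICE_PKG"},
              {"ticket": "CHG-0055", "object": "GL_POST_JOB"}]
approved_tickets = [{"id": "CHG-0041", "status": "approved"},
                    {"id": "CHG-0055", "status": "pending"}]
print(unauthorized_deployments(deploy_log, approved_tickets))
# [{'ticket': 'CHG-0055', 'object': 'GL_POST_JOB'}]
```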
Below is a Mermaid diagram summarizing the general flow of evidence gathering in a SOC engagement:
```mermaid
flowchart LR
    A["Plan <br/>Scope & Objectives"] --> B["Identify <br/>Key Controls"]
    B --> C["Choose <br/>Evidence Techniques"]
    C --> D["Collect <br/>Evidence"]
    D --> E["Evaluate & <br/>Corroborate"]
    E --> F["Conclude <br/>(SOC Opinion)"]
```
• A → B: Define engagement objectives and scope to determine which controls are in play.
• B → C: Select testing strategies tailored to the controls’ nature (e.g., re-performance, inspection).
• C → D: Execute the chosen procedures, using tools and best practices to gather evidence.
• D → E: Analyze findings, investigate exceptions, and cross-validate with multiple sources.
• E → F: Form conclusions on control effectiveness and craft your SOC report.
Common Pitfalls
• Overreliance on Inquiry: Interview responses alone rarely constitute sufficient evidence; corroborate what you are told with inspection, observation, or re-performance.
• Inadequate Sampling: Samples that are too small, poorly defined, or unrepresentative of the population undermine any conclusions drawn from them.
• Inconsistent Documentation: Workpapers that do not record what was tested, how items were selected, and what was found make conclusions difficult to support or re-create.
• Ignoring System-Generated Evidence Integrity: Reports and logs are only as reliable as the system that produces them; validate the relevant ITGCs before relying on system output.
• Excessive Trust in Third-Party Attestations: Evaluate the scope, period, and complementary user entity controls of any subservice organization’s SOC report rather than accepting its conclusions at face value.
Best Practices
• Maintain Continuous Communication: Keep open dialogues with key client personnel, especially when dealing with complex configurations or specialized software.
• Use Audit Trails and Automated Tools: Automate as much as possible (e.g., log analysis, system configuration checks) to reduce manual effort and errors.
• Adopt a Risk-Based Approach: Address high-impact, high-likelihood risk areas more extensively, while using standard sampling or observation for lower-risk controls.
• Validate System Reliability: Confirm that the system producing your evidence is well-controlled, which underpins reliable evidence. See Chapter 8 on ITGC for guidance.
• Correlate Multiple Evidence Sources: Strengthen conclusions by cross-verifying findings. Contradictory evidence suggests the need for deeper investigation.
Additional Resources
• AICPA Guide: SOC 1® and SOC 2® Examinations – In-depth guidance on evaluating evidence and testing controls.
• COBIT 2019 by ISACA – Provides governance and management guidance for ensuring relevant, reliable, and adequate information.
• NIST SP 800-53 – Outlines controls for federal information systems, offering insights into best practices for security and evidence collection.
• ISACA Journal – Frequent articles on IT auditing methodologies and emerging technologies for evidence gathering.
These resources reinforce how to integrate frameworks effectively, identify critical controls, and adopt a comprehensive approach to gather and evaluate evidence in SOC engagements.