CI & Automation Basics: CI Logs and Troubleshooting

Learning objective: Interpret CI logs and test failures to troubleshoot issues effectively.

GitHub Actions logs

When you push code or open a pull request, GitHub Actions runs your defined workflow and creates detailed logs for every step. Knowing how to read and navigate these logs is essential for swiftly finding and fixing problems.

💡 Think of log output as a conversation between your code and your CI system. Learning to “listen” to this conversation helps you find and resolve issues much faster.

Example workflow log section:

Run pytest
============================= test session starts ==============================
platform linux -- Python 3.11.0, pytest-7.3.1, pluggy-1.0.0
collected 3 items

tests/test_api.py ..F                                                [100%]

=================================== FAILURES ===================================
_________________________ test_create_user_duplicate ____________________________

    def test_create_user_duplicate():
        # ... test logic ...
>       assert response.status_code == 400
E       AssertionError: assert 422 == 400
E        +  where 422 = <Response [422]>.status_code

tests/test_api.py:20: AssertionError

Here you can see:

- which test failed (test_create_user_duplicate);
- the failing assertion (assert response.status_code == 400);
- the actual value received (422 instead of the expected 400);
- the exact file and line of the failure (tests/test_api.py:20).
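To see how pytest arrives at that report, here is a minimal, self-contained sketch of the same failing comparison. The Response class and create_duplicate_user function are hypothetical stand-ins for a real HTTP client and API call:

```python
# Minimal reproduction of the failing pattern above, using a hypothetical
# stand-in Response class instead of a real HTTP client.
class Response:
    def __init__(self, status_code):
        self.status_code = status_code

    def __repr__(self):
        return f"<Response [{self.status_code}]>"

def create_duplicate_user():
    # FastAPI returns 422 for request-validation errors, which is a common
    # cause of exactly the 422-vs-400 mismatch shown in the log.
    return Response(422)

response = create_duplicate_user()
try:
    assert response.status_code == 400
except AssertionError:
    # pytest would report this as: AssertionError: assert 422 == 400
    print(f"AssertionError: assert {response.status_code} == 400")
```

Running this prints the same shape of message the CI log shows, which is why reading the "assert X == Y" line tells you both the actual and the expected value at once.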

Common CI failure scenarios and their indicators

Not all failures are the same. Here are some typical reasons a CI pipeline may break, along with how to identify them by their log output.

1. Failed dependency installation

Example:

ERROR: Could not find a version that satisfies the requirement fastapi==9.9.9 (from versions: 0.1.8, ...)
ERROR: No matching distribution found for fastapi==9.9.9

⚠ Double-check your requirements.txt and confirm that each pinned version actually exists before troubleshooting further.
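A quick sanity check like the following can catch bad pins before a long CI run. This is a sketch using only the standard library; check_pins is a hypothetical helper, not a pip feature:

```python
# Sketch: compare "name==version" pins against what is actually installed,
# using only the standard library. check_pins is a hypothetical helper.
from importlib import metadata

def check_pins(pins):
    """Return (name, pinned, installed-or-None) for every mismatched pin."""
    problems = []
    for pin in pins:
        name, _, wanted = pin.partition("==")
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None  # package not installed at all
        if installed != wanted:
            problems.append((name, wanted, installed))
    return problems

# A deliberately impossible pin, mirroring the error above:
print(check_pins(["fastapi==9.9.9"]))
```

Running this locally before pushing surfaces the same mismatch pip would report in CI, without waiting for the pipeline.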

2. Test failures

Example:

_________________________ test_login_valid_user _________________________
E       AssertionError: assert 404 == 200

💡 Comparing expected and actual values can help you narrow down whether the test expectation or your implementation needs to be reviewed.
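A message attached to the assertion makes that comparison self-explanatory in the log. Here is a hypothetical sketch of the same 404-vs-200 situation; login and its in-memory user store are illustrative stand-ins:

```python
# Hypothetical sketch: a login check and an assertion message that records
# *which* case failed, not just the two status codes.
def login(username, password):
    users = {"alice": "s3cret"}  # assumed in-memory user store
    return 200 if users.get(username) == password else 404

status = login("bob", "s3cret")  # unknown user, so 404
# A bare `assert status == 200` would only log "assert 404 == 200".
# Adding a message puts the failing input into the log as well:
assert status == 404, f"unexpected status {status} for user 'bob'"
print("expected:", 200, "actual:", status)
```

When the expectation and the message both appear in the CI log, you can often tell whether the test or the implementation is wrong without rerunning anything.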

3. Environment issues or missing secrets

Tests that depend on environment variables or repository secrets fail when those values are not configured for the workflow, for example with a KeyError or an authentication error that never appears locally.
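Failing fast with an explicit message makes this class of problem obvious in the log. A minimal sketch (require_env and the variable name API_TOKEN are hypothetical):

```python
import os

# Sketch: fail fast with an explicit message when a required variable or
# secret is missing. The name "API_TOKEN" is purely illustrative.
def require_env(name):
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"Missing required environment variable {name!r}; "
            "check that the secret is defined in the repository settings "
            "and exposed through the workflow's env: block."
        )
    return value
```

Calling require_env("API_TOKEN") at the top of a test module turns a confusing mid-test failure into a one-line explanation at the start of the log.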

4. Service unavailability (for integration/Selenium tests)

Integration and browser tests depend on external services (a database, an API, a Selenium grid). Connection-refused or timeout errors in the log usually mean the service was not ready or not reachable when the tests started.
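A common mitigation is to wait for the service before starting the tests. Here is a generic sketch; wait_for is a hypothetical helper, and probe stands for any readiness check such as a socket connect or an HTTP health-check call:

```python
import time

# Sketch: wait for a dependent service before running integration tests.
# `probe` is any zero-argument callable that returns True once the
# service is reachable (e.g. a socket connect or an HTTP health check).
def wait_for(probe, attempts=10, delay=0.5):
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False
```

If wait_for returns False, the job can abort with a clear "service never became ready" message instead of a wall of connection errors from every test.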

Strategies for debugging failed CI runs

Solving CI failures is like piecing together a mystery: follow clues in the logs, and check each aspect methodically.

1. Start with the failed step

The workflow overview highlights the first step that failed; expand it and jump to the first error rather than reading the whole log top to bottom.

2. Read error messages closely

Error messages usually name the file, the line, and the exact mismatch; read the full message and traceback before guessing at a cause.

3. Compare CI and local environments

Ask yourself:

- Am I running the same Python and dependency versions locally as in CI?
- Are the same environment variables and secrets available in both places?
- Do my local services (database, APIs) match what the CI configuration provides?

💡 Even small differences in dependencies can cause surprising failures that only appear in CI.
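One way to rule out environment drift is to log a fingerprint at the start of every CI job. A minimal sketch using only the standard library:

```python
import platform
import sys

# Sketch: print an environment fingerprint at the start of a CI job so
# the log records exactly what the tests ran against.
print(f"Python:   {sys.version.split()[0]}")
print(f"Platform: {platform.platform()}")
print(f"Prefix:   {sys.prefix}")
```

When a failure only reproduces in CI, comparing this fingerprint against your local machine is often the fastest way to spot the difference.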

4. Use verbose output and print statements

Increase log detail when the default output does not explain the failure; for pytest, -v shows each test name and outcome, and -s lets print statements reach the log.

5. Make one change at a time

Change a single variable per run so that each new result tells you exactly which change fixed, or broke, the build.

♻️ If a step intermittently fails or reports network errors, try rerunning—sometimes, transient CI issues are outside your control.

Using GitHub Actions annotations for clearer error reporting

GitHub Actions provides annotations: visual highlights (such as error and warning icons) shown inline on the pull request diff and in the workflow summary, drawing attention to the most important problems.

You can also create custom annotations for more direct or team-specific guidance, using workflow commands printed to standard output:

echo "::error file=tests/test_api.py,line=20::Expected status 400 but got 422"

This will generate a clickable error message at the correct file and line in the GitHub UI.
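The same annotation can be emitted from a script or test helper. Here is a small sketch; format_annotation is a hypothetical helper name, while the ::error/::warning workflow-command syntax is what Actions actually parses from stdout:

```python
# Sketch: building GitHub Actions annotation lines from code.
# format_annotation is a hypothetical helper; the "::error"/"::warning"
# workflow-command syntax is what the runner parses from stdout.
def format_annotation(level, path, line, message):
    return f"::{level} file={path},line={line}::{message}"

# Printing the line is all it takes; the runner picks it up from stdout.
print(format_annotation("error", "tests/test_api.py", 20,
                        "Expected status 400 but got 422"))
```

Wrapping the command in a helper keeps the syntax in one place, so every script in the repository emits annotations the same way.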

🏆 Effective use of annotations saves time during code reviews and helps collaborators focus directly on problem areas.

Troubleshooting test failures


Troubleshooting test failures often means piecing together why something broke, not just what failed.

How do you currently use CI logs to diagnose issues in your development workflow? What new strategies from this lesson can you use to improve your troubleshooting speed and accuracy? In what ways could clearer log messages or custom error annotations benefit your team as your project grows?

Knowledge checks

❓ When reviewing a failed CI run in GitHub Actions, what is typically the most effective first step?

❓ Which of the following best describes the benefit of using custom annotations in your CI workflow?