Access Playwright Trace Viewer & Reports Online via Amazon S3
Running Playwright tests on a CI platform is a great way to ensure the reliability of your web application. However, managing the generated test reports and trace files can be cumbersome: traditionally, team members have to download these artifacts and analyze them locally, which is time-consuming and inefficient, and sharing the findings with the team poses its own set of challenges.
In this blog post, I’ll show you how to set up Playwright tests on GitHub Actions to generate test reports and trace files, and how to store them on Amazon S3 for convenient one-click online access, providing a more efficient way to analyze and share the results with the team.
Setting up Playwright Tests and Configuration
The first step is to set up Playwright tests for the application. The application under test (AUT) here is https://todolist.james.am/#/
I created 3 Playwright tests in 2 spec files (a minimal sketch of one spec follows the list):
- todo-passing.spec.ts: has 1 passing test.
- todo-failing.spec.ts: has 1 passing and 1 failing test.
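For reference, a minimal version of the passing spec could look like the sketch below; the exact locators and assertions in the repository may differ, and the placeholder text used here is an assumption about the AUT's markup.

// todo-passing.spec.ts (illustrative sketch, not the exact test from the repo)
import { test, expect } from '@playwright/test';

test('can add a todo item', async ({ page }) => {
  await page.goto('https://todolist.james.am/#/');
  // The placeholder text below is assumed; adjust it to match the AUT's input field.
  await page.getByPlaceholder('What needs to be done?').fill('Buy milk');
  await page.keyboard.press('Enter');
  await expect(page.getByText('Buy milk')).toBeVisible();
});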
In the playwright.config.ts file, I set:
trace: 'retain-on-failure'
reporter: 'html'
fullyParallel: true,
With this setup, all tests run in parallel, an HTML report is generated after test execution, and if a test fails, a trace file is generated and attached to the report.
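Putting those options together, the relevant part of playwright.config.ts looks roughly like this (other settings, such as testDir and projects, are omitted):

// playwright.config.ts (only the options discussed above)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,          // run tests in parallel
  reporter: 'html',             // generate an HTML report after the run
  use: {
    trace: 'retain-on-failure', // keep trace files only for failing tests
  },
});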
Setting up AWS and S3
To store and retrieve the report from S3, we need to set up an IAM role and create a bucket.
Creating the IAM role:
I followed an AWS guide to do this; we need to carry out steps 1–3, which create an IAM role that the GitHub Actions workflow can assume via OpenID Connect (OIDC).
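For reference, the trust policy attached to that role ends up looking roughly like the following; the account ID and repository path are placeholders to replace with your own. The role also needs a permissions policy that allows it to write objects (s3:PutObject) to the bucket created in the next step.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account-id>:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:<github-org>/<repo-name>:*"
        }
      }
    }
  ]
}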
Setting up the S3 bucket:
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/
2. Create a new bucket. Because the reports will be read publicly, turn off Block all public access for the bucket (acknowledging the warning) and keep the remaining defaults.
3. Open the created bucket and go to the Permissions tab to add the following policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<add your bucket name here>/*"
    }
  ]
}
Remember to replace <add your bucket name here> with the name of your bucket.
4. Finally, we need to enable static website hosting so that the reports are served as a static, interactive website. To do this, open the bucket, go to Properties, edit the Static website hosting setting, enable it, and set the index document to index.html.
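If you prefer the CLI, the same static website hosting setting can be applied with one command (replace the bucket name placeholder):

# Enable static website hosting with index.html as the index document
aws s3 website s3://<add your bucket name here>/ --index-document index.html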
GitHub Secrets and Variables
Next, we need to add a few GitHub secrets and variables to use in our YAML file.
Copy the role ARN by opening the IAM role you created earlier; under the Summary section you will see a field named ARN.
Open the bucket and copy the bucket name and the region. For the region, copy only the region code. For example, if the region is Europe (London) eu-west-2, we only need eu-west-2.
Navigate to the Settings → Secrets and variables → Actions section of your repository (/settings/secrets/actions) and add the copied values under the names shown below: the role ARN as a secret, the bucket name and region as repository variables.
# Secrets
AWS_ROLE_ARN: arn:aws:iam::......
# Variables
AWS_S3_BUCKET: playwright-trace
AWS_REGION: eu-west-2
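If you use the GitHub CLI, the same secret and variables can be added from the terminal instead of the web UI (this assumes gh is authenticated against the repository, and the values shown are the ones from this example):

# Add the role ARN as a secret and the bucket/region as repository variables
gh secret set AWS_ROLE_ARN --body "arn:aws:iam::......"
gh variable set AWS_S3_BUCKET --body "playwright-trace"
gh variable set AWS_REGION --body "eu-west-2"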
Integrating with GitHub Actions
I wanted the tests to run whenever a code change is pushed. Here’s how I set up the GitHub Actions workflow:
# GitHub Actions workflow file (playwright.yml)
name: Playwright Tests

on:
  push:
    branches:
      - main

permissions:
  id-token: write
  contents: read

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install dependencies
        run: npm ci
      - name: Install Chromium
        run: npx playwright install --with-deps chromium
      - name: Run Playwright tests
        run: npx playwright test
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 1
      # ...
In this workflow, we specified that the tests should run on push events to the main branch. It installs the dependencies, sets up Playwright, runs the tests, and uploads the test report as an artifact.
Storing Test Reports on Amazon S3
Next, we need to upload the results to the S3 bucket. For this, we first use the aws-actions/configure-aws-credentials action to configure AWS credentials via the role we created earlier.
# Configure credentials
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v4
  if: always()
  with:
    role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
    aws-region: ${{ vars.AWS_REGION }}
By default, Playwright stores the report under the playwright-report/ folder at the project root, and the trace files live inside this folder as well. We can copy this folder to the S3 bucket using the command:
aws s3 cp playwright-report/. s3://<bucket name> --recursive
Since we will be running our tests multiple times, we need a unique identifier to differentiate reports generated from each run. For this, we can append github.run_id to the folder name.
So the folder in the bucket will use the naming format:
playwright-report-${{ github.run_id }}
With this, the upload step would look like this:
# AWS S3 Upload Step
- name: Upload to S3 bucket
  id: S3
  if: always()
  env:
    REPORT_DIR: playwright-report-${{ github.run_id }}
  run: |
    echo "REPORT_DIR=$REPORT_DIR" >> $GITHUB_ENV
    aws s3 cp playwright-report/. s3://${{ vars.AWS_S3_BUCKET }}/$REPORT_DIR --recursive
We created a unique directory for each workflow run and copied the Playwright report into it, ensuring that reports from different runs don’t overwrite each other. Each run directory contains the index.html report along with the trace files it references.
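The exact layout depends on the Playwright version, but each run directory in the bucket ends up looking roughly like this:

playwright-report-7667223182/
├── index.html   # the HTML report
├── data/        # attachments, including trace .zip files for failing tests
└── trace/       # trace viewer assets referenced by the report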
Generating Accessible URLs
To make the test reports easily accessible via URLs, we need to complete a few more steps.
If you look at the report folder structure Playwright generates, the index.html file (which is the actual report) sits directly under the playwright-report folder.
In S3, the index.html file will therefore be at:
playwright-report-${{ github.run_id }}/index.html
So we can access it with a URL that looks like:
https://playwright-trace.s3.eu-west-2.amazonaws.com/playwright-report-7667223182/index.html
Where 7667223182 is the GitHub run id.
With that information, I can now create a dynamic URL as:
# Create URL Step
- name: Create URL file
  if: always()
  run: |
    REPORT_URL="https://${{ vars.AWS_S3_BUCKET }}.s3.${{ vars.AWS_REGION }}.amazonaws.com/${{ env.REPORT_DIR }}/index.html"
    echo $REPORT_URL > url.txt
    echo "Report URL: $REPORT_URL"
This step builds a URL pointing to the specific test report, prints it in the job log, and writes it to url.txt for later reference.
Displaying the URL in GitHub Actions
Finally, we want to display the report URL directly on the GitHub Actions run summary page.
Upon clicking the View Playwright Report hyperlink, the report from S3 opens for the user. We can achieve this by adding a step that writes a Markdown link to the job summary via $GITHUB_STEP_SUMMARY:
# Display URL Step
- name: Setup Job Summary
  if: always()
  run: |
    REPORT_URL="https://${{ vars.AWS_S3_BUCKET }}.s3.${{ vars.AWS_REGION }}.amazonaws.com/${{ env.REPORT_DIR }}/index.html"
    echo " 🔗 [View Playwright Report]($REPORT_URL)" >> $GITHUB_STEP_SUMMARY
Now, whenever a workflow run is completed, you can simply click on the “View Playwright Report” hyperlink in the GitHub Actions job summary to access the test report.
The final YAML would look like this:
name: Playwright Tests

on:
  push:
    branches:
      - main

permissions:
  id-token: write
  contents: read

jobs:
  test:
    timeout-minutes: 10
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install dependencies
        run: npm ci
      - name: Install Chromium
        run: npx playwright install --with-deps chromium
      - name: Run Playwright tests
        run: npx playwright test
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
          retention-days: 1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        if: always()
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ vars.AWS_REGION }}
      - name: Upload to S3 bucket
        id: S3
        if: always()
        env:
          REPORT_DIR: playwright-report-${{ github.run_id }}
        run: |
          echo "REPORT_DIR=$REPORT_DIR" >> $GITHUB_ENV
          aws s3 cp playwright-report/. s3://${{ vars.AWS_S3_BUCKET }}/$REPORT_DIR --recursive
      - name: Create URL file
        if: always()
        run: |
          REPORT_URL="https://${{ vars.AWS_S3_BUCKET }}.s3.${{ vars.AWS_REGION }}.amazonaws.com/${{ env.REPORT_DIR }}/index.html"
          echo $REPORT_URL > url.txt
          echo "Report URL: $REPORT_URL"
      - name: Setup Job Summary
        if: always()
        run: |
          REPORT_URL="https://${{ vars.AWS_S3_BUCKET }}.s3.${{ vars.AWS_REGION }}.amazonaws.com/${{ env.REPORT_DIR }}/index.html"
          echo " 🔗 [View Playwright Report]($REPORT_URL)" >> $GITHUB_STEP_SUMMARY
Project GitHub link: https://github.com/afsal-backer/Playwright-Online-TraceViewer
Security Concerns and Mitigations
While setting up Playwright tests and storing the test reports on Amazon S3 provides convenient access, it also raises some security concerns.
In the current setup, the S3 bucket is configured to allow public read access to all objects. This means anyone with the URL can access the reports, which may not be desirable.
Some of the mitigation strategies we could use:
- Pre-signed URLs: Instead of allowing public access, you can use presigned URLs to grant temporary, time-limited access to specific objects. This way, the reports remain private unless explicitly shared (a small sketch follows this list).
- Network path restrictions: You can restrict access to the S3 bucket to specific IP ranges or VPCs, adding an extra layer of security.
- Amazon CloudFront: If you don’t want to enable public access settings for your bucket but you still want your website to be public, you can create an Amazon CloudFront distribution to serve your static website. This not only provides a secure and friendlier URL but also adds a caching layer for better performance.
- Additional Considerations:
- Review Periodically: It’s a good practice to periodically review both your GitHub artifact and S3 object expiration settings to ensure they continue to meet your project’s needs.
- Cost Implications: Storing large amounts of data in S3, even temporarily, can incur costs. Aligning expiration policies can help manage these costs effectively.
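As a sketch of the first option, a time-limited link to a single report object can be generated with the AWS CLI; the bucket, key, and expiry below are illustrative. Note that the report page also loads attachments (such as trace files) from sibling folders, so those objects would need their own presigned URLs, or you could put CloudFront in front of the bucket instead.

# Generate a presigned URL for one report object, valid for one hour
aws s3 presign s3://playwright-trace/playwright-report-7667223182/index.html --expires-in 3600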
GitHub Pages vs. S3
While GitHub Pages can host static websites, it may not be the best choice for storing sensitive test reports. With GitHub Pages, you have less control over access restrictions and security.
Advantages of Amazon S3:
- Fine-grained access control using IAM and presigned URLs.
- Network path restrictions can be applied.
- Integration with CloudFront for performance and security.
- Better suited for storing and serving private content.
Conclusion
In this article, we explored the process of setting up Playwright tests, integrating them with GitHub Actions, and storing the test reports on Amazon S3 for easy online access. This approach provides a streamlined way to execute tests automatically whenever code changes are pushed to the repository and store the results securely.
While this setup offers convenience, it’s important to consider security implications. We discussed potential security issues and how to address these concerns.