If performance and SEO are fundamental aspects of your website, you should be monitoring them. Integrating Playwright with Lighthouse is an excellent way to start.
This setup allows you to run performance audits as part of your regular test suite, catching performance regressions before they reach production.
Why Combine Playwright with Lighthouse?
Lighthouse is Google's open-source tool for improving web page quality. It audits performance, accessibility, SEO, and best practices. However, running Lighthouse manually or through basic CI scripts doesn't integrate well with your existing test infrastructure. Playwright, on the other hand, is excellent for browser automation but doesn't have built-in performance auditing capabilities. By combining them, you get:
- Automated performance regression testing
- Integration with your existing Playwright test suite
- Consistent browser environments across tests
- Detailed performance metrics and reports
- CI/CD pipeline integration
Setting Up the Integration
Let's start by installing the required dependencies. You'll need Playwright and the playwright-lighthouse integration library:
npm install --save-dev @playwright/test playwright playwright-lighthouse get-port
The get-port package is crucial for managing ports dynamically, ensuring tests don't conflict with each other.
Creating the Lighthouse Test Fixture
The key to this integration is setting up a proper Playwright fixture that manages the Chrome debugging port. Here's an example setup:
// tests/performance/lighthouse.ts
import { chromium, Browser } from 'playwright'
import { test as base } from '@playwright/test'

// get-port is ESM-only, so load it via a dynamic import
const getPortPromise = import('get-port')

type LighthouseFixtures = {
  port: number
  browser: Browser
}

export const lighthouseTest = base.extend<{}, LighthouseFixtures>({
  // Allocate a unique remote-debugging port for each worker
  port: [
    async ({}, use) => {
      const { default: getPort } = await getPortPromise
      const port = await getPort()
      await use(port)
    },
    { scope: 'worker' },
  ],
  // Override the built-in browser fixture so Chromium exposes that port
  browser: [
    async ({ port }, use) => {
      const browser = await chromium.launch({
        args: [`--remote-debugging-port=${port}`],
      })
      await use(browser)
      await browser.close()
    },
    { scope: 'worker' },
  ],
})
export const test = lighthouseTest
What's happening here?
- Dynamic Port Allocation: get-port ensures each test worker gets a unique debugging port, preventing conflicts in parallel execution
- Worker Scope: Both fixtures use worker scope, meaning they're shared across tests in the same worker process
- Chrome Remote Debugging: The browser launches with --remote-debugging-port, which Lighthouse needs to connect and run audits (a simplified sketch of this follows)
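To see why the port matters, here's a simplified sketch of what playwright-lighthouse does under the hood with the Lighthouse Node API: it attaches to the Chromium instance that's already listening on the debugging port instead of launching its own browser. This is purely illustrative; playAudit handles all of it for you.

import lighthouse from 'lighthouse'

// Illustrative only: attach Lighthouse to an already-running Chromium
// via its remote debugging port, the way playAudit does internally.
async function auditOverDebugPort(url: string, port: number) {
  const result = await lighthouse(url, { port, output: 'json' })
  return result?.lhr // the raw Lighthouse result object
}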
Now for the Actual Measurements
Now let's create actual performance tests. The beauty of this approach is that you can test multiple pages with different performance thresholds:
// tests/performance/lighthouse.spec.ts
import { test } from './lighthouse'
const thresholds = {
accessibility: 90,
'best-practices': 90,
seo: 90,
}
const urlsToTest = [
{ name: 'Homepage', uri: '/' },
{ name: 'About Page', uri: '/about' },
{ name: 'Blogs Page', uri: '/blogs' },
]
test.describe('Lighthouse Audits', () => {
for (const pageInfo of urlsToTest) {
test(`Check Lighthouse scores for ${pageInfo.name}`, async ({
page,
port,
}) => {
      // Navigate to the page under test (assumes a local dev server on port 3000)
      await page.goto(`http://localhost:3000${pageInfo.uri}`)
      const { playAudit } = await import('playwright-lighthouse')
await playAudit({
page,
port,
thresholds,
})
})
}
})
A few things to note here. The thresholds are flexible, so you can set different standards for different pages, which is crucial for dynamic websites where content varies from page to page. And because the pages live in a simple array, adding a new one to the audit is just a matter of appending an entry to urlsToTest.
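If you want per-page thresholds, one option is to let each entry carry its own overrides and merge them with the shared defaults. A minimal sketch, reusing the test fixture and thresholds object from above (the override numbers are made up for illustration):

// Hypothetical per-page overrides merged over the shared defaults
const urlsWithOverrides = [
  { name: 'Homepage', uri: '/', thresholds: { performance: 90, seo: 95 } },
  { name: 'Blogs Page', uri: '/blogs', thresholds: { performance: 70 } },
]

for (const pageInfo of urlsWithOverrides) {
  test(`Check Lighthouse scores for ${pageInfo.name}`, async ({ page, port }) => {
    await page.goto(`http://localhost:3000${pageInfo.uri}`)
    const { playAudit } = await import('playwright-lighthouse')
    await playAudit({
      page,
      port,
      // Fall back to the shared defaults when a page has no override
      thresholds: { ...thresholds, ...pageInfo.thresholds },
    })
  })
}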
When you run these tests, Lighthouse evaluates several key areas you should be familiar with:
Performance Metrics
- First Contentful Paint (FCP): When the first text/image appears
- Largest Contentful Paint (LCP): When the main content is likely loaded
- Speed Index: How quickly content is visually displayed
- Cumulative Layout Shift (CLS): Visual stability measure (see the sketch after these lists for reading raw metric values)
Other Audits
- Accessibility: Screen reader compatibility, color contrast, ARIA labels
- Best Practices: HTTPS usage, console errors, deprecated APIs
- SEO: Meta tags, structured data, crawlability
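If you'd rather assert on the raw metric values than on category scores, recent versions of playwright-lighthouse resolve playAudit with the underlying Lighthouse result (lhr); that return shape is an assumption worth verifying against your installed version. Inside one of the tests above:

// Sketch: read raw metric values from the audit result.
// Assumes playAudit resolves with { lhr }, as recent playwright-lighthouse versions do.
const { playAudit } = await import('playwright-lighthouse')
const { lhr } = await playAudit({ page, port, thresholds })

const fcp = lhr.audits['first-contentful-paint'].numericValue // ms
const lcp = lhr.audits['largest-contentful-paint'].numericValue // ms
const cls = lhr.audits['cumulative-layout-shift'].numericValue // unitless

console.log({ fcp, lcp, cls })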
Advanced Configuration Options
You can customize the Lighthouse configuration for more specific testing needs:
const advancedThresholds = {
performance: 85,
accessibility: 95,
'best-practices': 90,
seo: 90,
}
const lighthouseConfig = {
extends: 'lighthouse:default',
settings: {
onlyCategories: ['performance', 'accessibility', 'best-practices', 'seo'],
skipAudits: ['uses-http2'], // Skip specific audits if needed
},
}
// In your test
await playAudit({
page,
port,
thresholds: advancedThresholds,
config: lighthouseConfig,
reports: {
formats: {
html: true,
json: true,
},
name: `lighthouse-report-${pageInfo.name}`,
directory: './lighthouse-reports',
},
})
Integration with Playwright Config
Your playwright.config.ts should include a dedicated performance testing project:
// playwright.config.ts
import { defineConfig, devices } from '@playwright/test'
export default defineConfig({
testDir: './tests',
projects: [
// ... other projects
{
name: 'Performance',
fullyParallel: true,
testDir: './tests/performance',
use: {
headless: true, // Performance tests should run headless
},
},
],
})
Running the Tests
Execute your performance tests with a dedicated npm script:
{
"scripts": {
"test:performance": "npx playwright test --project=Performance"
}
}
Before running the tests, make sure your development server is running:
# Terminal 1: Start your app
npm run dev
# Terminal 2: Run performance tests
npm run test:performance
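Alternatively, you can let Playwright manage the server itself with the webServer option, so a single command does both jobs. A minimal sketch, assuming your app serves on http://localhost:3000:

// In playwright.config.ts, alongside the projects above
export default defineConfig({
  // ...
  webServer: {
    command: 'npm run dev',
    url: 'http://localhost:3000',
    reuseExistingServer: !process.env.CI, // don't spawn a second server locally
  },
})

With this in place, npm run test:performance starts the app on demand and tears it down afterwards.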
CI/CD Integration
For continuous integration, you'll want to build and serve your application statically. Here we're using a simple GitHub Actions workflow to run the tests.
# .github/workflows/performance.yml
name: Performance Tests
on: [push, pull_request]
jobs:
performance:
runs-on: ubuntu-latest
steps:
      - uses: actions/checkout@v4
- name: Setup Node.js
        uses: actions/setup-node@v4
with:
node-version: '22'
cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright browsers
        run: npx playwright install --with-deps chromium
      - name: Build application
        run: npm run build
- name: Start server and run tests
run: |
npm start &
npx wait-on http://localhost:3000
npm run test:performance
Best Practices and Tips
1. Set Realistic Thresholds
Don't start with perfect scores (100/100). Begin with current performance levels and gradually improve:
const realisticThresholds = {
performance: 70, // Start here, aim for 90+
accessibility: 85, // Should be high from the start
'best-practices': 80,
seo: 75,
}
2. Test Multiple Device Types
Lighthouse can simulate different devices:
await playAudit({
page,
port,
thresholds,
config: {
extends: 'lighthouse:default',
settings: {
      formFactor: 'mobile', // replaces the deprecated emulatedFormFactor in newer Lighthouse versions
throttling: {
rttMs: 150,
throughputKbps: 1600,
cpuSlowdownMultiplier: 4,
},
},
},
})
3. Handle Dynamic Content
For pages with dynamic content, ensure they're fully loaded before auditing:
test('Performance test with dynamic content', async ({ page, port }) => {
await page.goto('http://localhost:3000/dashboard')
// Wait for dynamic content to load
await page.waitForSelector('[data-testid="dashboard-content"]')
await page.waitForLoadState('networkidle')
const { playAudit } = await import('playwright-lighthouse')
await playAudit({ page, port, thresholds })
})
4. Monitor Performance Over Time
Store results and track performance trends:
const results = await playAudit({
page,
port,
thresholds: {}, // Don't fail on thresholds, just collect data
reports: {
formats: { json: true },
directory: './performance-history',
},
})
// Store results in your preferred monitoring system
await storePerformanceMetrics(results, pageInfo.name)
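Here, storePerformanceMetrics stands in for whatever persistence you prefer. As a minimal sketch, a hypothetical helper that appends category scores to a local JSON file (the path and record shape are made up, and it again assumes the result exposes lhr):

import { promises as fs } from 'fs'

// Hypothetical helper: append one run's performance score to a JSON history file
async function storePerformanceMetrics(results: any, pageName: string) {
  const historyFile = './performance-history/history.json' // illustrative path
  const raw = await fs.readFile(historyFile, 'utf8').catch(() => '[]')
  const history = JSON.parse(raw)
  history.push({
    page: pageName,
    date: new Date().toISOString(),
    performance: results.lhr.categories.performance.score, // assumes { lhr } shape
  })
  await fs.writeFile(historyFile, JSON.stringify(history, null, 2))
}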
Troubleshooting Common Issues
Port Conflicts
If you encounter port conflicts, ensure you're using get-port correctly and that tests aren't running in shared browser contexts.
Timeout Errors
Increase Playwright's timeout for performance tests:
test.setTimeout(60000) // 60 seconds for performance tests
Inconsistent Results
Performance can vary between runs. Consider:
- Running tests multiple times and averaging results (see the sketch after this list)
- Using consistent hardware in CI
- Disabling CPU throttling for development testing
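Here's a rough sketch of the averaging idea in the same spec file, once more assuming playAudit resolves with the Lighthouse lhr object:

import { expect } from '@playwright/test'
import { test } from './lighthouse'

// Sketch: average the performance score over several runs to smooth out noise
test('Averaged performance score', async ({ page, port }) => {
  test.setTimeout(180_000) // several audits take a while
  const { playAudit } = await import('playwright-lighthouse')
  const runs = 3
  let total = 0
  for (let i = 0; i < runs; i++) {
    await page.goto('http://localhost:3000/')
    const { lhr } = await playAudit({ page, port, thresholds: {} }) // collect only
    total += (lhr.categories.performance.score ?? 0) * 100 // score is 0-1
  }
  expect(total / runs).toBeGreaterThanOrEqual(70) // assumed target
})

Note that repeated navigations in the same context may hit the browser cache, so consider opening a fresh context per run if you want cold-load numbers.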
That's it. I'd love to hear your thoughts and feedback.