Configuration File

TestCafe uses the .testcaferc.json configuration file to store its settings.

Settings you specify when you run TestCafe from the command line or via the programming interface override settings from .testcaferc.json. TestCafe prints information about every overridden property in the console.

Keep .testcaferc.json in the directory from which you run TestCafe. Most often, this is the project's root directory. TestCafe does not take into account configuration files located in other directories (for instance, the project's subdirectories).

A configuration file can include the settings described in the sections below.

The configuration file supports JSON5 syntax. This allows you to use JavaScript identifiers as object keys, single-quoted strings, comments and other JSON5 features.
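For instance, a configuration file that relies on JSON5 syntax might look like this (the option values are only illustrative):

{
    // JSON5 allows comments, unquoted keys and single-quoted strings
    browsers: 'chrome',
    src: ['tests/**/*.js']
}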

You can find a complete configuration file example in our GitHub repository.

browsers

Specifies one or several browsers in which tests should be run.

You can use browser aliases to specify locally installed browsers.

{
    "browsers": "chrome"
}
{
    "browsers": ["ie", "firefox"]
}

Use the all alias to run tests in all the installed browsers.
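For example, the following configuration runs tests in every browser installed on the machine:

{
    "browsers": "all"
}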

To specify a browser by the path to its executable, use the path: prefix. Enclose the path in backticks if it contains spaces.

{
    "browsers": "path:`C:\\Program Files\\Internet Explorer\\iexplore.exe`"
}

Alternatively, you can pass an object whose path property specifies the path to the browser executable. In this case, you can also provide an optional cmd property that contains command line parameters passed to the browser.

{
    "browsers": {
        "path": "/home/user/portable/firefox.app",
        "cmd": "--no-remote"
    }
}

To run tests in cloud browsers or other browsers accessed through a browser provider plugin, specify a browser alias that consists of the {browser-provider-name} prefix and the browser name (the latter can be omitted); for example, saucelabs:Chrome@52.0:Windows 8.1.

{
    "browsers": "saucelabs:Chrome@52.0:Windows 8.1"
}

To run tests in a browser on a remote device, specify remote as a browser alias.

If you want to connect multiple browsers, specify remote: and the number of browsers. For example, if you need to use four remote browsers, specify remote:4.

{
    "browsers": "remote:4"
}

You can add postfixes to browser aliases to run tests in headless mode, use Chrome device emulation, or use user profiles.

{
    "browsers": ["firefox:headless", "chrome:emulation:device=iphone X"]
}

You cannot add postfixes when you use the path: prefix or pass a { path, cmd } object.

CLI: Browser List
API: runner.browsers, BrowserConnection

src

Specifies files or directories from which to run tests.

{
    "src": "/home/user/tests/fixture.js"
}
{
    "src": ["/home/user/auth-tests/fixture-1.js", "/home/user/mobile-tests/"]
}

You can use globbing patterns to specify a set of files.

{
    "src": ["/home/user/tests/**/*.js", "!/home/user/tests/foo.js"]
}

CLI: File Path/Glob Pattern
API: runner.src

reporter

Specifies the name of a built-in or custom reporter that should generate test reports.

{
    "reporter": "list"
}

This configuration outputs the test report to stdout. To save a report to a file, pass an object whose name property specifies the reporter name and whose output property specifies the path to the file.

{
    "reporter": {
        "name": "xunit",
        "output": "reports/report.xml"
    }
}

You can use multiple reporters, but note that only one reporter can write to stdout. All other reporters must output to files.

{
    "reporter": [
        {
            "name": "spec"
        },
        {
            "name": "json",
            "output": "reports/report.json"
        }
    ]
}

CLI: -r, --reporter
API: runner.reporter

screenshotPath

Enables screenshots and specifies the base directory where they are saved.

{
    "screenshotPath": "/home/user/tests/screenshots/"
}

See Screenshots for details.

CLI: -s, --screenshots
API: runner.screenshots

takeScreenshotsOnFails

Specifies that a screenshot should be taken whenever a test fails.

{
    "takeScreenshotsOnFails": true
}

Screenshots are saved to the directory specified in the screenshotPath option.
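For instance, the following configuration (the path is illustrative) enables screenshots and captures one whenever a test fails:

{
    "screenshotPath": "/home/user/tests/screenshots/",
    "takeScreenshotsOnFails": true
}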

CLI: -S, --screenshots-on-fails
API: runner.screenshots

screenshotPathPattern

Specifies a custom pattern to compose screenshot files' relative path and name.

{
    "screenshotPathPattern": "${DATE}_${TIME}/test-${TEST_INDEX}/${USERAGENT}/${FILE_INDEX}.png"
}

See Path Pattern Placeholders for information about the available placeholders.

Use the screenshotPath option to enable screenshots.
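For example, the two options can be combined as follows (the path and pattern are illustrative):

{
    "screenshotPath": "/home/user/tests/screenshots/",
    "screenshotPathPattern": "${DATE}_${TIME}/${FILE_INDEX}.png"
}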

CLI: -p, --screenshot-path-pattern
API: runner.screenshots

videoPath

Enables TestCafe to record videos of test runs and specifies the base directory to save these videos.

{
    "videoPath": "reports/screen-captures"
}

See Record Videos for details.

CLI: --video
API: runner.video

videoOptions

Specifies options that define how TestCafe records videos of test runs.

{
    "videoOptions": {
        "singleFile": true,
        "failedOnly": true,
        "pathPattern": "${TEST_INDEX}/${USERAGENT}/${FILE_INDEX}.mp4"
    }
}

See Basic Video Options for the available options.

Use the videoPath option to enable video recording.
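For example, the following configuration (with illustrative values) enables recording and keeps videos of failed tests only:

{
    "videoPath": "reports/screen-captures",
    "videoOptions": {
        "failedOnly": true
    }
}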

CLI: --video-options
API: runner.video

videoEncodingOptions

Specifies video encoding options.

{
    "videoEncodingOptions": {
        "r": 20,
        "aspect": "4:3"
    }
}

You can pass all the options supported by the FFmpeg library. Refer to the FFmpeg documentation for information about the available options.

Use the videoPath option to enable video recording.

CLI: --video-encoding-options
API: runner.video

quarantineMode

Enables the quarantine mode for tests that fail.

{
    "quarantineMode": true
}

CLI: -q, --quarantine-mode
API: runner.run({ quarantineMode })

debugMode

Runs tests in the debugging mode.

{
    "debugMode": true
}

See the --debug-mode command line parameter for details.

CLI: -d, --debug-mode
API: runner.run({ debugMode })

debugOnFail

Specifies whether to automatically enter the debug mode when a test fails.

{
    "debugOnFail": true
}

If this option is enabled, TestCafe pauses the test when it fails. This allows you to view the tested page and determine the cause of the failure.

When you are done, click the Finish button in the footer to end test execution.

CLI: --debug-on-fail
API: runner.run({ debugOnFail })

skipJsErrors

Ignores JavaScript errors on a webpage.

{
    "skipJsErrors": true
}

When a JavaScript error occurs on a tested web page, TestCafe stops test execution and posts an error message and a stack trace to a report. To ignore JavaScript errors, set the skipJsErrors property to true.

CLI: -e, --skip-js-errors
API: runner.run({ skipJsErrors })

skipUncaughtErrors

Ignores uncaught errors and unhandled promise rejections in test code.

{
    "skipUncaughtErrors": true
}

When an uncaught error or unhandled promise rejection occurs on the server during test execution, TestCafe stops the test and posts an error message to a report. To ignore these errors, use the skipUncaughtErrors property.

CLI: -u, --skip-uncaught-errors
API: runner.run({ skipUncaughtErrors })

filter

Allows you to specify which tests or fixtures to run. Use the following properties individually or in combination.
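For instance, you can combine several properties to narrow the run down to a specific test within a specific fixture (the names below are illustrative):

{
    "filter": {
        "test": "Click a label",
        "fixture": "Sample fixture"
    }
}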

filter.test

Runs a test with the specified name.

{
    "filter": {
        "test": "Click a label"
    }
}

CLI: -t, --test
API: runner.filter

filter.testGrep

Runs tests whose names match the specified grep pattern.

{
    "filter": {
        "testGrep": "Click.*"
    }
}

CLI: -T, --test-grep
API: runner.filter

filter.fixture

Runs a fixture with the specified name.

{
    "filter": {
        "fixture": "Sample fixture"
    }
}

CLI: -f, --fixture
API: runner.filter

filter.fixtureGrep

Runs tests whose fixture names match the specified grep pattern.

{
    "filter": {
        "fixtureGrep": "Page.*"
    }
}

CLI: -F, --fixture-grep
API: runner.filter

filter.testMeta

Runs tests whose metadata matches the specified key-value pair.

{
    "filter": {
        "testMeta": {
            "device": "mobile",
            "env": "production"
        }
    }
}

This configuration runs tests whose metadata has the device property set to mobile and the env property set to production.

CLI: --test-meta
API: runner.filter

filter.fixtureMeta

Runs tests whose fixture's metadata matches the specified key-value pair.

{
    "filter": {
        "fixtureMeta": {
            "device": "mobile",
            "env": "production"
        }
    }
}

This configuration runs tests whose fixture's metadata has the device property set to mobile and the env property set to production.

CLI: --fixture-meta
API: runner.filter

appCommand

Executes the specified shell command before running tests.

{
    "appCommand": "node server.js"
}

Use the appCommand property to launch the application you are going to test. This application is automatically terminated after testing is finished.

The appInitDelay property specifies the amount of time allowed for this command to initialize the tested application.

TestCafe adds node_modules/.bin to PATH so that you can use binaries provided by locally installed dependencies without prefixes.
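For example, assuming the tested application can be served with a locally installed http-server package (a hypothetical dependency of your project), you could launch it without the node_modules/.bin/ prefix:

{
    // "http-server ./dist -p 8080" is a placeholder for your own server command
    "appCommand": "http-server ./dist -p 8080",
    "appInitDelay": 2000
}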

CLI: -a, --app
API: runner.startApp

appInitDelay

Specifies the time (in milliseconds) allowed for an application launched using the appCommand option to initialize.

TestCafe waits for the specified time before it starts running tests.

{
    "appCommand": "node server.js",
    "appInitDelay": 3000
}

Default value: 1000

CLI: --app-init-delay
API: runner.startApp

concurrency

Specifies the number of browser instances that should run tests concurrently.

{
    "concurrency": 3
}

TestCafe opens several instances of the same browser and creates a pool of browser instances. Tests are run concurrently against this pool: each test runs in the first available instance.

See Concurrent Test Execution for more information about concurrent test execution.

CLI: -c, --concurrency
API: runner.concurrency

selectorTimeout

Specifies the time (in milliseconds) within which selectors attempt to obtain a node to be returned. See Selector Timeout for details.

{
    "selectorTimeout": 3000
}

Default value: 10000

CLI: --selector-timeout
API: runner.run({ selectorTimeout })

assertionTimeout

Specifies the time (in milliseconds) within which TestCafe attempts to successfully execute an assertion if a selector property or a client function was passed as the actual value. See Smart Assertion Query Mechanism.

{
    "assertionTimeout": 1000
}

Default value: 3000

CLI: --assertion-timeout
API: runner.run({ assertionTimeout })

pageLoadTimeout

Specifies the time (in milliseconds) passed after the DOMContentLoaded event, within which TestCafe waits for the window.load event to fire.

After the timeout passes or the window.load event is raised (whichever happens first), TestCafe starts the test.

{
    "pageLoadTimeout": 1000
}

Default value: 3000

See the command line --page-load-timeout parameter for details.

CLI: --page-load-timeout
API: runner.run({ pageLoadTimeout })

speed

Specifies the test execution speed.

Tests are run at the maximum speed by default. You can use this option to slow tests down.

Provide a number between 1 (the fastest) and 0.01 (the slowest).

{
    "speed": 0.1
}

Default value: 1

If the speed is also specified for an individual action, the action's speed setting overrides the test speed.

CLI: --speed
API: runner.run({ speed })

port1, port2

Specifies custom port numbers TestCafe uses to perform testing. The number range is [0-65535].

{
    "port1": 12345,
    "port2": 54321
}

If you do not specify the ports, TestCafe selects them automatically.

CLI: --ports
API: createTestCafe

hostname

Specifies your computer's hostname. It is used when you run tests in remote browsers.

{
    "hostname": "host.mycorp.com"
}

If the hostname is not specified, TestCafe uses the operating system's hostname or the current machine's network IP address.

CLI: --hostname
API: createTestCafe

proxy

Specifies the proxy server used in your local network to access the Internet.

{
    "proxy": "proxy.corp.mycompany.com"
}
{
    "proxy": "172.0.10.10:8080"
}

You can also specify authentication credentials with the proxy host.

{
    "proxy": "username:password@proxy.mycorp.com"
}

CLI: --proxy
API: runner.useProxy

proxyBypass

Specifies resources that TestCafe accesses directly, bypassing the proxy server.

{
    "proxyBypass": "*.mycompany.com"
}
{
    "proxyBypass": ["localhost:8080", "internal-resource.corp.mycompany.com"]
}

See the --proxy-bypass command line parameter for details.

CLI: --proxy-bypass
API: runner.useProxy

ssl

Provides options that allow you to establish an HTTPS connection between the client browser and the TestCafe server.

{
    "ssl": {
        "pfx": "path/to/file.pfx",
        "rejectUnauthorized": true
    }
}

See the --ssl command line parameter for details.

CLI: --ssl
API: createTestCafe

developmentMode

Enables mechanisms to log and diagnose errors. You should enable this option if you are going to contact TestCafe Support to report an issue.

{
    "developmentMode": true
}

CLI: --dev
API: createTestCafe

qrCode

If you launch TestCafe from the console, this option outputs a QR-code that represents URLs used to connect remote browsers.

{
    "qrCode": true
}

CLI: --qr-code

stopOnFirstFail

Stops an entire test run if any test fails.

{
    "stopOnFirstFail": true
}

CLI: --sf, --stop-on-first-fail
API: runner.run({ stopOnFirstFail })

color

Enables colors in the command line.

{
    "color": true
}

CLI: --color

noColor

Disables colors in the command line.

{
    "noColor": true
}

CLI: --no-color