Incorporating build steps for actions and web content
The web directory and any directory that represents an action can be built automatically as part of deployment. You can trigger this behavior in one of two ways.
- By placing a file called build.sh (for Mac or Linux) or build.cmd (for Windows), or both, in each directory in which you want builds to occur. This could be the web directory for web content or any of the action directories. Each build file should contain a shell script to execute, with the directory in which it's placed as the current directory. Note: If both .sh and .cmd files are provided, only the one appropriate for the current operating system is used. If only one is provided, the deployer will run it on systems for which that kind of script is appropriate and indicate an error on other systems.
- By placing a package.json file in each directory in which you want builds to occur. The presence of this file causes one of the following commands to be executed.
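The command that runs depends on your package manager and on whether package.json contains a build script. As a sketch (the exact flags are assumptions):

```
npm install --production           # npm, no build script in package.json
yarn install --production          # yarn, no build script in package.json
npm install && npm run build       # npm, package.json has a build script
yarn install && yarn run build     # yarn, package.json has a build script
```

The --production flag shown here matches the dev-dependency behavior described below, where dev dependencies are omitted when there is no build script.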
One of the npm commands is used by default, but you can cause yarn to be used instead via the --yarn flag on the nim project deploy command. One of the former two commands is used if package.json does not contain a build script; the presence of a build script in package.json causes one of the latter two commands to be used.
The build.* triggers take precedence over the package.json trigger. If a script is found, only the script is executed. Of course, the script can always employ npm or yarn commands as needed.
The interpretation of the package.json trigger depends on what is in package.json: when a build script is found in package.json, the dev dependencies are included in node_modules; when there is no such script, those dependencies are not included. This behavior corresponds to the most common expected use cases. If it does not correspond to your needs, you can use a build.sh and/or build.cmd to trigger the build and put the exact sequence of commands you need in that script.
Note: build.sh and build.cmd are automatically ignored and do not have to be listed in .ignore. In contrast, package.json is not automatically ignored in this way.
Building precedes the determination of what web files to upload or which action directories to zip into the action. This has two implications:
- You can optionally use the script to generate the .ignore file that refines this process.
- If the build is designed to produce a .zip file directly, you must ensure that there are no other files that will be interpreted as a part of the action, or else the deployer will do its own zipping. The easiest way to ensure that there is only one zip file to consider is to use a one-line .include file pointing to the zip file.
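For example, if your build produces myaction.zip (an illustrative name, not taken from this document), a one-line .include file such as the following ensures the deployer uploads only that zip:

```
myaction.zip
```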
Examples of building (common use cases)
Let’s start with a simple Node.js example, qrcode, which provides a Node.js function in a single source file. The function depends on npm packages. You can clone this example from GitHub.
Here is a part of the project layout:
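A sketch of what that layout typically looks like (the directory names follow the standard nim project convention and are assumptions here, not taken verbatim from the example):

```
qrcode/
└── packages/
    └── default/
        └── qrcode/
            ├── package.json
            └── qr.js
```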
We use qrcode to show all the automation that takes place and how much it reduces the work you have to do.
The presence of package.json triggers the npm install, after which the normal behavior for multifile actions (autozipping) takes over and creates a zip file to upload that contains qr.js, package.json, and the entire node_modules directory generated by npm. If you try this yourself with your own project, bear in mind that the runtime for Node.js requires either that package.json provide an accurate designation of the main file or else that the file be called index.js.
Errors in builds
The deployer decides whether a build has failed by examining the return code from the subprocess running the build. When that code is nonzero, the deployer displays all output of the build subprocess, both stdout and stderr. However, if the build exits with code zero, the deployer does not display any output. Therefore, it’s good practice to ensure that a build sets a nonzero exit code on failure.
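One common way to guarantee a nonzero exit code on failure is to make the script fail fast. A minimal build.sh sketch (the commented npm steps are placeholders for your real build commands):

```shell
#!/bin/bash
# Exit immediately if any command fails, treat unset variables as errors,
# and fail a pipeline if any stage fails -- so the deployer sees a nonzero code.
set -euo pipefail

echo "installing dependencies"
# npm install          # real build steps would go here
echo "running build"
# npm run build
```

With `set -e`, the script stops at the first failing command and propagates its exit code, so the deployer surfaces the build output instead of silently succeeding.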
Tip: If you suspect a build is not doing what you expect but there is no visible error, try rerunning nim project deploy with the --verbose-build flag. This causes the entire build output to display on the console, regardless of the build's exit status. This will often reveal errors that are being swallowed because the build exits with status code zero despite them.
Build states and the effect of --incremental on builds
The --incremental option affects whether or not builds are executed. To understand how this works, it's important to understand build state, because an incremental deploy only rebuilds the web directory or an action if it is in the unbuilt state.
Two build states: Built and Unbuilt
A Built or Unbuilt state is applied to the following directory types:
- Each action that has a build step
- The web directory
In this discussion, we'll refer to "the directory" to mean either of these two directory types.
If the directory is in the unbuilt state, an incremental deploy runs the build as usual before determining whether anything should be deployed.
If the directory is in the built state, incremental deployment proceeds directly to deciding whether a change has occurred, without running the build again.
What determines built or unbuilt states
Build state is determined as follows:
If the build is triggered by a package.json that does not contain a build script, the directory is considered built if and only if:
- It contains a package-lock.json (if run with npm) or a yarn.lock (if run with yarn) and a node_modules directory, both of which are newer than the package.json. If both package-lock.json and yarn.lock are present, the newer of the two is used in this determination.
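The heuristic can be pictured as the following shell sketch (an illustration of the rule as stated, not the deployer's actual implementation):

```shell
# A directory counts as "built" when node_modules and a lock file exist
# and both are newer than package.json. With two lock files, use the newer.
is_built() {
  local dir=$1
  [ -d "$dir/node_modules" ] || return 1
  local lock
  if [ -f "$dir/package-lock.json" ] && [ -f "$dir/yarn.lock" ]; then
    # ls -t lists newest first, so this picks the newer lock file.
    lock=$(ls -t "$dir/package-lock.json" "$dir/yarn.lock" | head -n 1)
  elif [ -f "$dir/package-lock.json" ]; then
    lock="$dir/package-lock.json"
  elif [ -f "$dir/yarn.lock" ]; then
    lock="$dir/yarn.lock"
  else
    return 1
  fi
  [ "$lock" -nt "$dir/package.json" ] && \
    [ "$dir/node_modules" -nt "$dir/package.json" ]
}
```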
If the build employs a script, including the case where package.json includes a build script, then the directory is considered built if and only if:
- The directory containing the script also contains a file called .built.
In the script case, the convention of using a .built marker to suppress subsequent builds requires the script to create this marker when it executes. This is a very coarse-grained heuristic, used because:
- The deployer doesn’t know the dependencies of the build.
- It’s better to err in the direction of efficiency when doing incremental deploying.
If you have problems with an incremental build, you always have the remedy of running a full deploy.
Note: Using the .built marker convention is optional. If the script does not create a .built marker, it always runs, which is fine if the script does its own dependency analysis and rebuilds only what it needs to.
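A build.sh that opts into the marker convention simply creates .built as its last step. A sketch (the echoed step stands in for real build commands):

```shell
#!/bin/bash
set -euo pipefail

echo "running build steps"
# npm install && npm run build    # real build steps would go here

# Create the marker so subsequent incremental deploys skip this build.
touch .built
```

Because the marker is created only if every preceding step succeeds (thanks to `set -e`), a failed build leaves the directory in the unbuilt state and it is retried on the next incremental deploy.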
The package.json case also employs a heuristic and we can't guarantee perfect accuracy. However, it works well in simple cases. Again, you always have the fallback of running a full deploy.
Remote builds
By default, building is a local operation. However, by specifying --remote-build on the nim project deploy command, you can cause the build to be performed in a runtime container in the cloud.
- For actions, the runtime will be the same as the one that will be used to run the action.
- For web content, the remote build always runs in the same fixed runtime.
Note: Our runtime containers are Linux systems. Thus, build scripts for remote building should be build.sh and not build.cmd.
Possible reasons why you may want to do this are:
- The action compiles to a native binary, and running the build locally might produce the wrong result. This is currently an issue with the swift runtimes and may extend to others over time.
- The build may incorporate native binaries through dependencies. This is potentially true even of interpretive languages.
- You are initiating the build from the workbench, where local storage is not available to run the build. In fact, in the workbench, --remote-build is on by default.
A build will not work remotely unless you have storage credentials; the response to the credential check should be true. Otherwise, you can request that storage be added to your namespace. Once this has been done, refresh your local credential store to pick up the change.
A build will not work remotely if the .include directive denotes any files or directories outside the directory being built. If an .include directive already violates this rule when you issue nim project deploy --remote-build, you will get an early diagnostic. If the build itself generates or modifies the .include so that it violates this rule, the remote build will fail with a (possibly less helpful) diagnostic.
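For instance, an .include entry like the following (an illustrative path) reaches outside the directory being built and would trigger the diagnostic:

```
../shared/common.js
```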
It is an objective to liberalize this restriction in the future, to allow resources located elsewhere in the project. There is no plan to support remote builds including resources that lie outside the project entirely.
Switching between local and remote builds
Often a build produces a lot of artifacts. Some may be included in the deployed action, some are temporary, and none are needed to run the next build. In a remote build, everything in the action or web directory must be uploaded to the cloud where the runtime container resides. If you have built an action or the web directory locally, you should clean it before initiating a remote build affecting the same directory. The deployer may abort with a message to this effect if it detects that it is being asked to do a suspiciously large upload.
Default remote builds
Certain runtimes have "default remote builds," which occur if you specify --remote-build but do not have a build.sh for the action or web content in question. Currently, those runtimes include swift:default. Default remote builds will be made available for all runtimes that require a native binary as the final artifact for the action.
The default build does what the runtime would do if you submitted a source file (or a zip containing source files organized appropriately for the language) directly, except that:
- when you submit source files directly, they are stored as the code of the action and compiled each time the action is invoked
- with the default remote build, the compilation runs at build time and what is stored for the action is the binary code.
The latter is, obviously, more performant.
If you specify a build.sh for a remote build, even in languages that have default remote builds, it replaces rather than augments the default remote build. This can make the build.sh hard to craft, because details encapsulated in the default remote build can be non-obvious without knowledge of OpenWhisk runtime internals. You can have the best of both worlds by invoking the default remote build explicitly as a step (usually the final step) of your build.sh.