Profiling Forge App Memory to Reduce Compute Costs

Dr. Shu Shen
January 7, 2026

With Atlassian Forge officially transitioning to a consumption-based pricing model in 2026, efficiency is now a core engineering requirement. Under the new model, your app's cost is tied directly to GB-seconds: the product of your execution time and your allocated memory.

While execution time is often the focus of optimization, memory allocation is an equally critical lever for cost control. Many developers leave their apps at the default 512MB setting, essentially paying for "empty air."

By profiling real-world memory usage, you can safely dial down your allocations and protect your bottom line.

Why Memory Allocation Matters for Your Bill

Because Forge’s runtime is built on serverless infrastructure, the economic principles of AWS Lambda apply here as well. A recent article on dev.to provides excellent insights into how memory allocation drives serverless pricing.

The takeaway is simple: Your costs scale with your reservation, not just your usage. In Forge, if you allocate 512MB but your code only ever needs 200MB, you are being billed for that unused 312MB every single time the function runs. Narrowing this gap is one of the most effective ways to reduce your GB-second consumption.
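
To make that gap concrete, here is a quick back-of-the-envelope model of GB-second consumption. The 800ms duration and one million invocations are made-up illustration numbers, and your actual bill depends on Atlassian's published rates:

// GB-seconds = allocated memory (GB) * execution time (s) * invocations.
const gbSeconds = (memoryMB, durationMs, invocations) =>
  (memoryMB / 1024) * (durationMs / 1000) * invocations;

console.log(gbSeconds(512, 800, 1_000_000)); // 400,000 GB-s at the 512MB default
console.log(gbSeconds(200, 800, 1_000_000)); // 156,250 GB-s if right-sized to 200MB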

How to Profile Memory in Forge

Atlassian’s Developer Console reports invocation counts and success rates, but it doesn't currently provide a "Maximum RAM Used" metric for successful calls. To get real numbers, you have to instrument your code yourself, sampling the Resident Set Size (RSS) via process.memoryUsage.rss().

Tracking Peak Usage

To understand your memory requirements, you can track the peak RSS observed during the lifetime of a specific execution environment using a simple module-level variable.

// memProfile.js
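// Module-level state persists across invocations within a warm process,
// so peakMemory records the high-water mark for that process's lifetime.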
let peakMemory = 0;

export const monMemUsage = (label = "Checkpoint") => {
  const mem = process.memoryUsage.rss();
  const memMB = Math.round(mem / 1024 / 1024);
  
  if (memMB > peakMemory) {
    peakMemory = memMB;
  }
  
  console.log(`[Memory] ${label} - Current: ${memMB}MB, Peak Observed: ${peakMemory}MB`);
};

Instrumenting a Resolver

To get an accurate profile, call the monitor at the entry and exit points of your resolvers, as well as after any memory-intensive operations (like fetching large datasets from the Jira API).

import Resolver from '@forge/resolver';
import { monMemUsage } from './memProfile';
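// Note: someHeavyApiCall and transformData below are placeholders for your
// app's own data-fetching and processing logic.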

const resolver = new Resolver();

resolver.define('getData', async (req) => {
  monMemUsage('Resolver Start');
  
  const data = await someHeavyApiCall();
  monMemUsage('Post API Fetch');
  
  const result = transformData(data);
  monMemUsage('Resolver End');
  
  return result;
});

export const handler = resolver.getDefinitions();

Understanding the Scope: Per-Process Profiling

It is important to remember that this memory tracking is per-process. When your Forge app handles multiple requests, the Forge platform may utilize concurrent processes. Each process maintains its own module-level state in its own memory space.

  • Consolidation: Because you cannot see a single "global" peak across all users, you should aggregate your logs to see the spread of peaks across different processes (a rough aggregation sketch follows this list).
  • Statistical Alignment: If your workload is uniform, the peaks across different processes will eventually align over a large enough sample size.
  • Beware the "Black Swan": Use caution if your app has rarely executed code paths. In engineering and risk management, these are known as Black Swans - events that occur rarely but have a massive impact. For example, an "Export All Data" button might use 450MB, even if your "typical" peak is only 180MB. If you optimize based only on average usage, those rare tasks will crash with Out-of-Memory (OOM) errors.
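
To support the consolidation step above, you can pull your logs down (the forge logs CLI command is one way) and summarize the spread of peaks with a few lines of Node. This is a rough sketch: the log file name is a hypothetical placeholder and the percentile math is deliberately simple:

// aggregatePeaks.js
// Summarizes "[Memory] ... Peak Observed: NMB" lines saved from forge logs.
import { readFileSync } from 'fs';

const peaks = readFileSync('forge-logs.txt', 'utf8') // hypothetical file name
  .split('\n')
  .map((line) => line.match(/Peak Observed: (\d+)MB/))
  .filter(Boolean)
  .map((m) => Number(m[1]))
  .sort((a, b) => a - b);

const percentile = (p) =>
  peaks[Math.min(peaks.length - 1, Math.floor((p / 100) * peaks.length))];

console.log(`Samples: ${peaks.length}, Max: ${peaks[peaks.length - 1]}MB`);
console.log(`p50: ${percentile(50)}MB, p95: ${percentile(95)}MB, p99: ${percentile(99)}MB`);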

Determining Your "Safe" Allocation

Once you have identified your peak memory usage through profiling, the temptation is to set your memoryMB in the manifest.yml as close to that number as possible. However, setting the limit too tight is dangerous for several reasons:

  1. RSS vs. Platform Limits: process.memoryUsage.rss() is an internal measurement of the Node.js process. While it is the closest metric available to us, it is not always a 1:1 match for the total memory allocation limits enforced by the Forge platform.
  2. V8 Garbage Collection: Node.js memory usage is not static. The V8 engine may delay garbage collection, causing temporary spikes in memory usage that exceed your "normal" high watermark.
  3. Headroom for Growth: Even if your code is efficient today, you need a buffer for unusual data payloads or future minor code changes that might slightly increase the memory footprint.
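
With those caveats in mind, one way to turn a worst observed peak into a candidate setting is to apply a generous multiplier and round up. The helper below is an illustrative rule of thumb, not an official Forge formula: the 1.5x buffer and 50MB rounding step are assumptions to tune, and you should check the Forge documentation for which memoryMB values the platform actually accepts:

// An illustrative sizing heuristic; the 1.5x buffer and 50MB rounding
// step are assumptions, not values mandated by Forge.
const recommendMemoryMB = (worstPeakMB, bufferFactor = 1.5, stepMB = 50) =>
  Math.ceil((worstPeakMB * bufferFactor) / stepMB) * stepMB;

console.log(recommendMemoryMB(180)); // 300, matching the allocation chosen in the conclusion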

The Staging and Verification Phase

Before rolling out new memory settings to production, it is vital to let the application "bake" in a staging environment. This phase allows you to verify your assumptions against a variety of workloads without impacting end-users.

  • Simulate Load: Run your most intensive operations multiple times to see whether the peak memory usage remains stable or creeps up across warm invocations (see the sketch after this list).
  • Monitor for Failures: Keep a close eye on the logs for any Out-of-Memory (OOM) errors. If you see crashes in staging, your headroom is too small.
  • Verify "Rare" Paths: Manually trigger those less-frequent code paths (the "Black Swans") to ensure they can still execute within the new proposed limits.
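
As a sketch of the load-simulation step, you can register a temporary, staging-only resolver that loops over your heavy path and logs memory after each pass. The resolver key, iteration count, and dummy workload below are illustration choices rather than part of the app above:

// loadTest.js (staging only; remove before promoting to production)
import Resolver from '@forge/resolver';
import { monMemUsage } from './memProfile';

// Stand-in for your heaviest real operation, e.g. a large Jira API fetch.
const someHeavyApiCall = async () => new Array(500_000).fill({ key: 'ISSUE-1' });

const resolver = new Resolver();

resolver.define('memoryLoadTest', async () => {
  monMemUsage('Load test start');
  for (let i = 1; i <= 10; i++) {
    await someHeavyApiCall();
    monMemUsage(`Iteration ${i}`); // watch whether the peak stabilizes or creeps
  }
  return 'done';
});

export const handler = resolver.getDefinitions();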

Conclusion: Achieving a 40% Cost Reduction

In our profiling, we found that peak RSS consistently stayed around 180MB for standard operations. After accounting for the headroom factors and measurement variances discussed above, we set memoryMB to 300 in our manifest.yml.

This provided a healthy 120MB buffer for spikes, while still delivering a 40% reduction in our billable compute footprint compared to the 512MB default.

app:
  runtime:
    name: nodejs22.x
    memoryMB: 300 # Optimized from 512MB default

By combining targeted instrumentation with a conservative buffer and a rigorous staging phase, you can confidently optimize your Forge footprint for the new consumption-based world. Deploy your changes, watch the logs, and enjoy the efficiency gains.
