
Programmable HTTP Access Phase and Request Body Reading #1044

Merged
xeioex merged 4 commits into nginx:master from xeioex:nginx_body_routing_done
May 6, 2026

Conversation

@xeioex xeioex commented Apr 3, 2026

Programmable HTTP Access Phase and Request Body Reading

Overview

Three new capabilities let JavaScript handlers participate in request
processing before the content phase begins:

  • js_access -- registers a handler in the HTTP access phase for
    authorization, routing, and request preprocessing.
  • r.readRequestText(), r.readRequestArrayBuffer(),
    r.readRequestJSON()
    -- async methods that read and cache the
    request body, available in any HTTP handler.
  • r.readRequestForm() -- async method that parses the request
    body as an HTML form (application/x-www-form-urlencoded and
    multipart/form-data) and returns a structured accessor object.

Together they enable decisions based on headers, arguments, variables,
and the request body -- all resolved before content generation or
proxying starts.

js_access Directive

js_access <module.function>;

Context: http, server, location

The handler runs in NGX_HTTP_ACCESS_PHASE, before content handlers
(js_content, proxy_pass, fastcgi_pass, etc.) and after built-in
access checkers (allow/deny, auth_basic, auth_request).
Configuration inherits from outer to inner blocks.

  • Subrequests are skipped -- only the main request invokes the
    handler.
  • A synchronous handler can set variables, call r.return(status)
    to reject the request, or simply return to continue to the next phase.
  • An async handler (returning a Promise) suspends the request until
    the Promise settles, enabling ngx.fetch(), r.subrequest(),
    setTimeout(), and body reading without blocking the event loop.
  • An unhandled exception produces 500 Internal Server Error.
  • If the handler does not call r.return(), processing continues
    normally to the content phase (the handler returns NGX_OK to the
    access phase checker).
  • r.decline() signals "no opinion" (NGX_DECLINED), deferring the
    access decision to other checkers. Useful with the satisfy any
    directive when the handler should not grant or deny by itself.
  • r.return(301|302|303|307|308, url) sends a redirect directly
    to the client, enabling authentication flows (OIDC, SAML) that
    redirect unauthenticated users to an identity provider.
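
A minimal sketch of the three possible outcomes in one handler (the
banned() and has_opinion() predicates are hypothetical placeholders,
not part of the API):

```js
function gate(r) {
    if (banned(r)) {            // hypothetical predicate
        r.return(403);          // deny: the access phase returns 403
        return;
    }

    if (!has_opinion(r)) {      // hypothetical predicate
        r.decline();            // NGX_DECLINED: defer to other checkers
        return;
    }

    // returning normally maps to NGX_OK: continue to the next phase
}
```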

Interaction with satisfy

js_access participates in the satisfy directive like other access
phase modules:

  • satisfy all (default) -- all access checkers must allow. A
    handler that completes normally returns NGX_OK (allow); r.return(403)
    denies; r.decline() passes through without granting or denying.
  • satisfy any -- at least one checker must allow. NGX_OK from
    js_access grants access even if deny all is configured.
    r.decline() expresses "no opinion", letting other modules decide.
JS handler action    Phase return    satisfy all            satisfy any
normal return        NGX_OK          allow, next checker    grant, skip rest
r.return(403)        403             deny                   save error, try next
r.decline()          NGX_DECLINED    next checker           next checker

Request Body Reading

Three async methods read and cache the request body:

Method                       Returns
r.readRequestText()          Promise<string>
r.readRequestArrayBuffer()   Promise<ArrayBuffer>
r.readRequestJSON()          Promise<object>
  • The body is read once and cached; repeated calls return the same data.
  • Concurrent reads from different methods are rejected with an error.
  • The body remains available for downstream phases (proxy_pass
    forwards it unchanged).
  • Works with chunked transfer encoding, large bodies (respects
    client_body_buffer_size), client_body_in_file_only, and
    client_max_body_size enforcement.
  • Available in both js_access and js_content handlers.
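
A sketch of the caching behavior (the handler body is illustrative):

```js
async function inspect(r) {
    let text = await r.readRequestText();   // reads and caches the body
    let data = await r.readRequestJSON();   // served from the same cache

    // Starting a second read before the first settles is an error:
    //   let a = r.readRequestText();
    //   let b = r.readRequestJSON();       // this Promise is rejected
}
```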

Form Parsing

r.readRequestForm([options]) parses the request body as an HTML
form and resolves to a form object:

let form = await r.readRequestForm({maxKeys: 64});

Supported content types:

  • application/x-www-form-urlencoded
  • multipart/form-data

Options:

  • maxKeys (default 128) -- caps the total number of fields; exceeding
    it rejects the Promise with an error.

Form object accessors:

Method                  Returns
form.get(name)          first value of name, or null
form.getAll(name)       array of all values of name
form.has(name)          true if name has at least one value
form.forEach(cb)        iterates (value, key) pairs in order
form.hasFiles()         true if any part has a filename
form.fileFieldNames()   names of file parts (with duplicates)
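
The get()/getAll()/has() conventions mirror the standard URLSearchParams
class; a runnable illustration of those semantics (an analogy, not the
njs implementation itself):

```javascript
// URLSearchParams follows the same accessor conventions the form
// object uses: get() returns the first value or null, getAll()
// always returns an array, has() tests presence.
const params = new URLSearchParams('user=alice&tag=a&tag=b');

const first = params.get('tag');       // first value only
const all = params.getAll('tag');      // every value, in order
const present = params.has('user');    // true
const missing = params.get('nope');    // null for absent names
```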

Behavior:

  • The result is cached -- subsequent calls return the same object
    regardless of options.
  • Concurrent calls with body readers (readRequestText() etc.) are
    rejected.
  • Errors (unsupported Content-Type, malformed body, maxKeys exceeded)
    reject the Promise; TypeError for content-type problems, plain
    Error for parse failures.
  • File parts are detected but their contents are not exposed; only
    field names are reported via hasFiles() and fileFieldNames().
    A proper File API with streaming Blob semantics for bodies larger
    than memory is a significant amount of work; workloads that consume
    file contents should forward the request via proxy_pass.
  • Available in both js_access and js_content handlers; the parsed
    body is preserved and forwarded to downstream phases unchanged.
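
A sketch of handling those rejections in an access handler (the 415
and 400 status choices here are illustrative, not prescribed by the
API):

```js
async function guarded(r) {
    let form;

    try {
        form = await r.readRequestForm({maxKeys: 32});
    } catch (e) {
        // TypeError signals a content-type problem; a plain Error
        // signals a malformed body or a maxKeys overflow.
        r.return(e instanceof TypeError ? 415 : 400);
        return;
    }

    r.variables.user = form.get('username') || '';
}
```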

Examples

Authentication with an External Service

async function auth(r) {
    let resp = await ngx.fetch(
        `http://auth-service/check?token=${r.args.token}`);

    if (resp.status !== 200) {
        r.return(resp.status);
        return;
    }

    r.variables.user = await resp.text();
}
location /api/ {
    js_access auth.auth;
    proxy_pass http://backend;
}

The access handler calls an auth service via ngx.fetch(). On failure
the request is rejected immediately; on success a variable is set for
downstream use. The content handler (proxy_pass) only runs after
authentication succeeds.

Authentication with a Subrequest

async function auth(r) {
    let reply = await r.subrequest(
        '/auth_check?token=' + r.args.token);

    if (reply.status !== 200) {
        r.return(reply.status);
        return;
    }

    r.variables.user = reply.responseText;
}

Same pattern using an internal subrequest instead of an outbound fetch.

Dynamic Upstream Routing

function route(r) {
    r.variables.upstream = (r.args.dest === 'one')
        ? '127.0.0.1:8081' : '127.0.0.1:8082';
}
js_var $upstream;

location /route {
    js_access test.route;
    proxy_pass http://$upstream;
}

The access handler computes a routing variable synchronously; proxy_pass
evaluates it after the access phase completes.

Body-Based Access Control

async function body_gate(r) {
    let body = await r.readRequestJSON();

    if (body.role === 'admin') {
        r.return(403);
        return;
    }

    r.variables.foo = body.method + ':' + body.name;
}
js_var $foo;

location /api {
    js_access policy.body_gate;
    proxy_pass http://backend;
}

The request body is parsed as JSON in the access phase. The policy
decision is made before the request reaches the backend. The body is
preserved and forwarded to proxy_pass unchanged.

Body-Driven Routing

const backends = {
    us: '127.0.0.1:8081',
    eu: '127.0.0.1:8082',
};

async function route_by_body(r) {
    let body = await r.readRequestJSON();

    r.variables.upstream = backends[body.region]
                           || '127.0.0.1:8083';
}
js_var $upstream;

location /route {
    js_access routing.route_by_body;
    proxy_pass http://$upstream;
}

Combines body reading with dynamic routing: the request is parsed once
and the upstream is selected based on a field in the payload.

Form-Based Access Control

async function login_gate(r) {
    let form = await r.readRequestForm();

    let user = form.get('username');
    let token = form.get('csrf');

    if (!user || token !== r.variables.cookie_csrf) {
        r.return(403);
        return;
    }

    r.variables.user = user;
}
js_var $user;

location /login {
    js_access auth.login_gate;
    proxy_pass http://backend;
}

A classic HTML form (application/x-www-form-urlencoded) is parsed
in the access phase. The handler enforces a CSRF token check against
a cookie and forwards the validated request. The body is preserved
for proxy_pass.

Reject File Uploads at the Edge

async function no_uploads(r) {
    let form = await r.readRequestForm({maxKeys: 32});

    if (form.hasFiles()) {
        r.return(403, 'file uploads are not allowed here');
        return;
    }
}
location /api/comments {
    client_max_body_size 64k;
    js_access policy.no_uploads;
    proxy_pass http://backend;
}

hasFiles() lets a handler reject multipart requests that contain
file parts without buffering the file contents. Combined with
client_max_body_size it provides a cheap edge-side filter against
unwanted uploads.

Multipart Field Inspection

async function route_by_form(r) {
    let form = await r.readRequestForm({maxKeys: 16});

    r.variables.upstream = form.has('priority')
        ? '127.0.0.1:8081' : '127.0.0.1:8082';

    r.log('files: ' + form.fileFieldNames().join(','));
}
js_var $upstream;

location /submit {
    js_access routing.route_by_form;
    proxy_pass http://$upstream;
}

Both multipart/form-data and urlencoded forms are handled
uniformly. fileFieldNames() exposes which fields carried files
(useful for logging, metrics, or routing) without reading their
contents.

Variable Enrichment

function enrich(r) {
    r.variables.foo = 'client=' + r.remoteAddress
                      + ';method=' + r.method;
}
js_var $foo;

server {
    js_access utils.enrich;

    location / {
        proxy_set_header X-Request-Info  $foo;
        proxy_pass http://backend;
    }
}

A server-level access handler enriches every request with computed
variables. Downstream locations consume them for header injection,
logging, or routing without duplicating the logic.

Access-Phase Redirect

async function auth(r) {
    let resp = await ngx.fetch(
        `http://auth-service/validate?token=${r.args.token}`);

    if (resp.status !== 200) {
        r.return(302, 'https://idp.example.com/login?rd=' + r.uri);
        return;
    }

    r.variables.user = await resp.text();
}
location /protected/ {
    js_access auth.auth;
    proxy_pass http://backend;
}

When authentication fails the handler redirects the client to an
identity provider. On success the request continues to proxy_pass.

Combining with satisfy any

function check_api_key(r) {
    if (r.headersIn['X-API-Key'] === 'secret') {
        return;       /* NGX_OK -- grant access */
    }

    r.decline();      /* no opinion, let other checkers decide */
}
location /api/ {
    satisfy any;

    allow 192.168.1.0/24;
    deny all;

    js_access auth.check_api_key;
    proxy_pass http://backend;
}

Requests with a valid API key are allowed regardless of IP. Without a
key the handler declines and the IP-based allow/deny rules apply.

@xeioex xeioex changed the title Nginx body routing done Programmable HTTP Access Phase and Request Body Reading Apr 3, 2026
@xeioex xeioex force-pushed the nginx_body_routing_done branch 2 times, most recently from 3145d7c to a6f3608 Compare April 6, 2026 23:42

xeioex commented Apr 6, 2026

Hi @lancedockins,

you may find it useful.


lancedockins commented Apr 7, 2026

Thanks, @xeioex. This is awesome. Since I first raised that suggestion, we had to find a workaround to do the routing and protection that we needed, but I'd certainly prefer to do this via an access phase filter than via a workaround.

Off the cuff, the only thing that sounds like it's overtly missing from this sort of implementation is a body "parser". OpenResty has this for url encoded body types:
https://github.com/openresty/lua-nginx-module?tab=readme-ov-file#ngxreqget_post_args

While it's certainly great if you can constrain your use of this function to something like JSON bodies, that's not really up to us. We have to work with POST bodies of varying types.

That might dictate the need for a few other attributes or utility functions. It certainly did for us when we created this type of logic.

In our case, we needed this regardless of form submission type - not because we're concerned about the attachments that come with the uploads but because we care about the standard field types that precede those attachments. Because of that, we had to write our own body parse logic that determines whether a POST is url encoded or multipart and then populates an object with key value pairs based on that non-attachment data. For our purposes, we "discard" the attachment in the sense that we don't try to parse that into a JSON object (obviously we don't strip the attachment). If you were to institute this, you might need to add a configurable threshold to limit the size of POST bodies that you try to parse. That's what we did. We had to construct body size detection functions that return the size or false depending on whether they exceed the threshold or not.

Ultimately we were able to achieve all of our existing implementations through NJS and creative Nginx configuration. But I suspect that it would be better and faster as a proper C implementation.

Functions that I would consider important for this feature to be "complete":

  • Some sort of body parse function for plain text POST bodies - either for url encoded POST types or the url encoded portions of multipart form submissions
  • A body size attribute or function
  • A configurable threshold for body parsing to prevent CPU exhaustion trying to parse excessively large POST bodies
  • A POST body hashing function that can be used to calculate something like an MD5 hash of the POST body. Obviously this would work in tandem with the POST body threshold limit so that it doesn't try to hash excessively large POST bodies. If you're trying to use the access phase filters for security or access gating, that hash can be useful

Beyond that, I don't really see any issues that would prevent us from using this in the way that we intend.

The only question that I have with what you've shared is the sync vs async nature of some elements here. It appears that you're saying that any "body read" would require an async call. That makes sense given the I/O involved, but is it correct to say that it's only async insofar as it needs to be to read or process the backend I/O? In other words, it's still ultimately a blocking filter? It just refrains from blocking other request processing that could occur within the access phase? But hypothetically if the backend I/O was particularly slow, it would eventually net out to a situation where everything that is non-blocking in the access phase completes and the backend I/O call ultimately becomes a blocking event in the access phase rather than passing the request onto the next phase? Given the intent of this sort of filter, that would be my assumption as you obviously wouldn't want a slow backend process to function like an access phase bypass.

I may have some other thoughts here. But those are my initial thoughts and those are coming from heavy use of this type of functionality in our stack (albeit via workarounds and custom NJS code).


xeioex commented Apr 8, 2026

@lancedockins

, it's still ultimately a blocking filter?

No, this is a non-blocking filter. The sync vs async distinction is purely a JS syntax matter: using await readRequestText() requires the access handler to be declared as an async function.

@lancedockins

@lancedockins

, it's still ultimately a blocking filter?

No, this is a non-blocking filter. The sync vs async distinction is purely a JS syntax matter: using await readRequestText() requires the access handler to be declared as an async function.

So just to be clear, when you say that it is non-blocking, does that mean that race conditions are possible? Like hypothetically if your access filter was slow, could the content phase be reached such that the access filter never declines access even if it would ultimately resolve to do so? An access filter that could be bypassed strictly by a race condition would not be an access filter that I would trust sufficiently to use it. So that is a significant detail that I want to understand correctly.


xeioex commented Apr 8, 2026

@lancedockins

Thanks, this is very helpful feedback.

A few points from the current implementation/API surface:

  • Body hashing is also already possible in JS today after reading the body, using either the legacy crypto module (md5, sha1, sha256) or WebCrypto for SHA-family digests.

I am planning to add FormData()-like api to support both application/x-www-form-urlencoded and multipart/form-data as r.readRequestForm()

It should close the Form gap.

A POST body hashing function that can be used to calculate something like an MD5 hash of the POST body.

How do you imagine it to work? read the body -> hash or something else?

A body size attribute or function

Will let body = await r.readRequestBody(); body.length be enough? With Content-Length we know it already from r.headersIn, with chunked encoding we have to read it first.

I am planning to add a streamable body API, but not in this scope. maybe this will help.

@lancedockins

Thanks, @xeioex.

Understood on body hashing. That's what we do now so I guess that's technically a duplicate feature request. This API can certainly live without that. Basically we do read the body and then hash it (excluding attachments).

I don't think that everyone will always need or want the body w/ its attachments. Speaking for our use, we mainly need the key/value data in the form POST rather than the attachments, but there are certainly situations where the attachment part of the client body could be important too so there might need to be a way to retrieve the attachment (optionally) for some users. Since Lua doesn't include that in their get_post_args function either and it seems like more of a niche need, though, I don't know that it strikes me as a requirement so much as a nice eventuality.

Regarding FormData or something similar, that sort of parsing functionality is absolutely critical for this from my perspective, so I think that that's a great and necessary addition. Admittedly, I'm not a fan of the FormData API. It's overly cumbersome as it forces key/value pair requests through getters and setters rather than just treating the data like an object that inherently contains key value pairs that you can call via objectname.property or objectname['property']. I think it's geared more towards form data manipulation rather than basic read operations. A standard JS object would be a lot more natural here unless you're intending to support POST body data writes for things like subrequests (e.g. modify the POST body data before submitting it to a subrequest). I can't personally think of a use case for modifying the POST body before submission upstream. That seems like more of a bad idea or niche case than not to me.

Regarding body size, technically your body.length recommendation would work but it wouldn't actually fulfill our use case to do it that way. Right now we have to use client_body_in_file_only and the file system API to determine actual POST body sizes. Then we choose whether to parse or not based on its size. Essentially, we're trying to avoid expending significant CPU time and memory on body parsing for very large client POST bodies. If I'm following your explanation correctly, it sounds like readRequestBody() would first read the data of the POST into memory and then require a size calculation. So if the end goal is to avoid excessive CPU and memory use from pulling the full POST body into memory and parsing, that method would not achieve that. Given that, I suppose that if there was a configurable attribute for this functionality that you could use to determine whether NJS should try to parse or read the body or not, that is something that you could solve via configuration for this module rather than by exposing the size of the POST body as a property. But I can think of other situations where I would still want to know the POST body size. To your point, though, I suppose Content-Length does sort of answer that as well. Either way, though, some sort of limiting threshold seems appropriate to prevent excessive resource consumption with this.

Other questions about how this would all work:

  1. Did you see my question about race condition potential with this as a non-blocking filter? I just want to be sure that I fully understand what we're dealing with here since an access phase filter would need to guarantee a final allow/deny regardless of any potential delays in that phase (rather than proceeding through the content phase and serving content).
  2. Would this js_access filter functionality require the use of client_body_in_file_only or some similar configuration in order to read the POST body args? Or with your intended strategy, would this be able to work independently of that setting?


xeioex commented Apr 8, 2026

@lancedockins

So just to be clear, when you say that it is non-blocking, does that mean that race conditions are possible? Like hypothetically if your access filter was slow, could the content phase be reached such that the access filter never declines access even if it would ultimately resolve to do so?

no, this is not possible.

Would this js_access filter functionality require the use of client_body_in_file_only or some similar configuration in order to read the POST body args? Or with your intended strategy, would this be able to work independently of that setting?

No need for client_body_in_file_only; yes, it works independently of that setting.

@lancedockins

@xeioex thank you.

Given all of that and what you've said you plan to do with this so far, the only remaining thought that I have that I wouldn't be able to solve with what already exists in NJS and what you're adding or planning to add is the defensive limits on POST body parsing. Just to clarify a bit further on that, I realize that you can use client_max_body_size but the concern that I have is independent from that. If we have a high client_max_body_size value to allow for large attachment uploads, you could end up with different mixes of url encoded data vs attachment data. Technically for a 100M POST body, the POST body could be 99M of attachments and 1M of url encoded data or the reverse. It would probably be fine to parse the 1M of url encoded data in a 100M submission. But parsing out 99M of url encoded data definitely wouldn't be good.

Obviously a 99M POST of url encoded data isn't likely a common real world scenario. The intent with the limit is to prevent bad actors from DoS'ing the server by tampering with the submission to exploit the resource capacity needed to process and parse the url encoded data.

Hopefully that makes sense.


drsm commented Apr 8, 2026

Hi!

Naming -- are readRequestText(), readRequestArrayBuffer(), and
readRequestJSON() clear and consistent? Would you prefer shorter
names or a different convention?

Maybe consider readRequestBody(options) with default { type: "text" }?
This would allow exposing an options.maxBodySize param, so one can get some sort of RangeError if it's exceeded.


xeioex commented Apr 8, 2026

@drsm

The idea from the current naming is to align it with existing Fetch API.

We cannot unambiguously use text(), arrayBuffer(), or body.text(), as in njs r stands for both Request and Response; hence the readRequest*() naming.


xeioex commented Apr 8, 2026

@lancedockins

I realize that you can use client_max_body_size but the concern that I have is independent from that. If we have a high client_max_body_size value to allow for large attachment uploads, you could end up with different mixes of url encoded data vs attachment data

I am not sure we need a dedicated njs max_body_size limit. Rather, we may add an optional argument for readRequestForm(), similar to maxKeys in querystring.parse(); it will have a reasonable limit for the max number of form args. This even looks to me like a more appropriate limit for CPU consumption.


xeioex commented Apr 8, 2026

@lancedockins

Admittedly, I'm not a fan of the FormData API. It's overly cumbersome as it forces key/value pair requests through getters and setters rather than just treating the data like an object that inherently contains key value pairs that you can call via objectname.property or object name['property']

yes, but what about forms with identical property names?
For example, for querystring.

qs.parse('foo=bar&abc=xyz')
the output is

{
  foo: 'bar',
  abc: 'xyz',
}

qs.parse('foo=bar&abc=xyz&abc=123')
the output is

{
  foo: 'bar',
  abc: ['xyz', '123']
}

if we do the same, a user needs to check first, which is inconvenient.
Whereas getAll() always returns an array, so you do not need the extra check. So, if you expect several duplicate prop names you use getAll(); if you do not care, you use get(). Which is a bit more predictable, I think.

@lancedockins

Fair point. I guess that I've never much cared whether I had to do an additional check on string vs array. It's only a bit different from get vs getAll as you do actually still have to think about whether there might be duplicated key names in the query string to make the get vs getAll decision. So from my vantage point, for that particular case, there's no difference in the inconvenience. You have to think it through either way.

@xeioex xeioex marked this pull request as ready for review April 25, 2026 00:37
@xeioex xeioex force-pushed the nginx_body_routing_done branch from a6f3608 to 8a89155 Compare April 25, 2026 00:39
@xeioex xeioex requested a review from VadimZhestikov April 25, 2026 00:39
@xeioex xeioex force-pushed the nginx_body_routing_done branch from 8a89155 to d079547 Compare April 25, 2026 00:47
@VadimZhestikov

Claude assisted review:

Bugs / Correctness Issues

  1. Boundary character validation is missing (security risk)

The code validates boundary length (≤ 200) but not character validity. RFC 2046 restricts boundary characters to
alphanumerics and a specific set of specials — explicitly excluding \r, \n, and ". The current code admits these:

// ngx_js_form.c — only this check exists:
if (value.len == 0 || value.len > NGX_JS_FORM_MAX_BOUNDARY_LEN) { error }

A client-supplied boundary containing \r\n-- is self-consistent within the parser (same bytes used everywhere), but
the inner scan loop searches for \r\n-- as the part separator prefix. A boundary like x\r\n--y causes the scanner to
find false \r\n-- hits inside the boundary token itself before the ngx_memcmp can reject them. For adversarial inputs
this forces O(n × m) backtracking on every false hit. More seriously it contradicts RFC 2046 §5.1.1. Fix: reject any
boundary byte outside the RFC bcharsnospace set.

  2. Filename value is silently dropped

The filename= parameter is parsed only to set is_file = 1; the string itself is never stored in ngx_js_form_entry_t.
So fileFieldNames() returns the field name attribute (e.g., "attachment"), not the actual uploaded filename (e.g.,
"report.pdf"). A user asking "what files were uploaded and what are they named?" has no way to answer this through the
current API. This is either a usability deficiency that needs a design decision before merge, or it must be clearly
documented as a known limitation.

  3. Empty field names accepted silently in URL-encoded forms

Input =value produces an entry where name.len == 0. The test suite covers missing name= in multipart
Content-Disposition, but not bare =value in URL-encoded bodies. This entry is then reachable via form.get(""). Either
reject it (consistent with the multipart behavior) or document it.

  4. readRequestForm() is not tested in js_access phase

js_request_form.t tests form parsing but through js_content. js_access_body.t tests body reading in the access phase
but only for readRequestText/ArrayBuffer/JSON. The FORM bitflag in the state machine (state & 4) is exercised by a
different code path than the plain body read. There is no test that calls readRequestForm() from a js_access handler,
which is the primary advertised use case.

Incomplete Documentation / API Clarity

  5. "Concurrent reads rejected" semantics need clarification in the TypeScript docs

The TS file says body reading methods work in js_access and js_content. The test correctly shows that concurrent reads
(second readRequest* before first promise resolves) throw an error, but sequential reads work fine and return cached
data. The phrase in the TS JSDoc comment — "concurrent" — needs to be defined explicitly so users don't assume all
cross-method reads are forbidden.

  6. body_read_data vs request_form caching interaction undocumented

When readRequestText() is called first and the body is cached, then readRequestForm() is called: does the form parser
re-use the cached body bytes or re-trigger ngx_http_read_client_request_body? The state machine's FORM flag suggests
this is handled, but there is no test for this call order, and the TypeScript docs say nothing about it.

  7. Inherit/override behavior for js_access at multiple levels

The directive is registered for NGX_HTTP_MAIN_CONF | NGX_HTTP_SRV_CONF | NGX_HTTP_LOC_CONF. The merge logic presumably
takes the most-specific level. This should be documented, as the behavior is not self-evident. js_content has the
same issue but users are already familiar with it; js_access compounds it because access decisions are
security-sensitive.

Nice-to-have Before Merge (Low Risk, High Value)

  1. Comment the --boundary / --boundary-- prefix search logic

The initial scan in ngx_js_form_parse_multipart uses the same delimiter bytes for both the opening (dlen) and closing
(cdlen) search. That they can point to the same position (empty body case) and are handled correctly is non-obvious.
One comment would prevent future maintainers from "simplifying" the check incorrectly.

  9. ngx_js_form_find could use memmem

On Linux, memmem(3) is available and uses a two-way algorithm. The current naive scan is bounded to O(n × 200) by
NGX_JS_FORM_MAX_BOUNDARY_LEN, which is acceptable but leaves performance on the table for common cases with shorter
boundaries. This is low priority but could be a one-line fix.


After Merge

API Enhancements

  10. Add maxValueSize option to readRequestForm()

maxKeys = 128 bounds the field count but not field size. A single field=<50MB> passes unchecked (the only guard is
client_max_body_size). A maxValueSize option would complete the DoS protection story that lancedockins raised and
maxKeys only partially addressed.

  11. Expose the actual filename string

Return the filename= parameter value alongside (or instead of) just flagging is_file. The natural place is a separate
entries slot in ngx_js_form_entry_t for filename. The JS API could expose it via form.getFileName(fieldName) or a
richer entry object. This is the most commonly requested feature once users start using the API.

  12. filename* (RFC 8187) support

filename*=UTF-8''... for non-ASCII filenames. Not needed for the first iteration but will become a support ticket
quickly for any internationalized deployment.

  13. Per-part Content-Type access

The multipart parser ignores Content-Type headers on individual parts. A future form.getContentType(name) would let JS
distinguish JSON fields, binary blobs, and plain text in the same form.

  5. File content access (opt-in, with size limit)

Right now there is no way to read the actual bytes of a file upload. An option like { readFiles: true, maxFileSize: N
} would unlock the primary use case for multipart in an API gateway (virus scan, content inspection). This is
intentionally out of scope for the first version but should be the next step.

Performance

  1. Boyer-Moore or memmem for the boundary scan

Once the validator from item 8 is in (bounding boundary characters), a proper implementation of memmem or a two-pass
Boyer-Moore would reduce average-case multipart parse time from O(n × boundary_len) to O(n). This matters for large
file uploads being sniffed in the access phase.

Ecosystem / DX

  1. readRequestBody(type, options) consolidation

drsm's suggestion is worth revisiting now that the API is published and users have a chance to react. If there is no
strong objection, consolidating the four methods into one with a type discriminant ("text", "arrayBuffer", "json",
"form") and a shared options bag would reduce the API surface and make size limits consistent across all types. This
should wait for user feedback first.
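
A plain-JS sketch of that consolidated shape; readRequestBody, its type strings, and the mock parser table are hypothetical, following drsm's proposal rather than any shipped njs API:

```javascript
// Hypothetical consolidation: one entry point with a type discriminant
// dispatching to per-type parsers over a raw body string. The real
// method would be async and read the body; only the shape is the point.
function readRequestBody(rawBody, type, options = {}) {
    const parsers = {
        text: (b) => b,
        json: (b) => JSON.parse(b),
        arrayBuffer: (b) => new TextEncoder().encode(b).buffer,
        form: (b) => new URLSearchParams(b),  // urlencoded only, in this sketch
    };

    const parse = parsers[type];
    if (parse === undefined) {
        throw new TypeError(`unknown body type "${type}"`);
    }

    // A shared options bag would let size limits apply uniformly here.
    return parse(rawBody, options);
}

const form = readRequestBody("a=1&b=2", "form");
form.get("a");    // "1"
```

One options bag validated in one place is the main win: today each readRequest*() method validates its own options independently.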

  2. Non-ASCII field names and values in multipart

URL-encoded bodies percent-decode everything, so UTF-8 field names work. Multipart Content-Disposition field names are
NOT decoded (no percent-encoding exists there; RFC 8187 name*= syntax is for parameters). A user sending a multipart
form with a UTF-8 field name set by JavaScript's FormData gets raw bytes in name. This should be documented and
potentially handled with RFC 8187 name*= parsing in a follow-up.
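
A small plain-JS illustration of the raw-bytes outcome (the byte array is a hand-encoded example, not njs output): if a handler receives the undecoded name bytes, TextDecoder recovers the string, since no percent-decoding step exists for multipart names.

```javascript
// "имя" ("name" in Russian) as the raw UTF-8 bytes a multipart
// Content-Disposition name= parameter carries on the wire.
const rawName = new Uint8Array([0xd0, 0xb8, 0xd0, 0xbc, 0xd1, 0x8f]);

// Recovering the string is a plain UTF-8 decode, nothing more.
const name = new TextDecoder("utf-8").decode(rawName);
console.log(name);    // "имя"
```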

  3. js_access inside if() blocks

nginx if() creates an implicit sub-location. The command flags allow LOC_CONF, but the if() interaction is notoriously
subtle in nginx. Testing and documenting js_access inside if() blocks (or explicitly noting it is unsupported) would
prevent user confusion.

  Summary Table

  ┌─────────────────────────────────────┬──────────┬──────────────┐
  │                Item                 │ Priority │    Timing    │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ Boundary character validation       │ High     │ Before merge │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ Filename value stored/documented    │ High     │ Before merge │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ Empty URL-encoded names             │ Medium   │ Before merge │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ readRequestForm in js_access test   │ High     │ Before merge │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ Concurrent vs sequential read docs  │ Medium   │ Before merge │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ body_read_data + form caching test  │ Medium   │ Before merge │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ Multi-level js_access merge docs    │ Medium   │ Before merge │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ Boundary search comment             │ Low      │ Before merge │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ maxValueSize option                 │ High     │ After merge  │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ Store/expose actual filename        │ High     │ After merge  │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ filename* RFC 8187                  │ Medium   │ After merge  │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ Per-part Content-Type access        │ Low      │ After merge  │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ File content access (opt-in)        │ High     │ After merge  │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ memmem / Boyer-Moore                │ Low      │ After merge  │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ API consolidation (readRequestBody) │ Low      │ After merge  │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ Non-ASCII multipart field names     │ Medium   │ After merge  │
  ├─────────────────────────────────────┼──────────┼──────────────┤
  │ if() block behavior                 │ Low      │ After merge  │
  └─────────────────────────────────────┴──────────┴──────────────┘

@VadimZhestikov (Contributor)

Boundary character validation is missing (security risk)
The code validates boundary length (≤ 200) but not character validity.
RFC 2046 restricts boundary characters to alphanumerics and a specific
set of specials — explicitly excluding \r, \n, and ". The current
code admits these:

// ngx_js_form.c — only this check exists:
if (value.len == 0 || value.len > NGX_JS_FORM_MAX_BOUNDARY_LEN) { error }

A client-supplied boundary containing \r\n-- is self-consistent within
the parser (same bytes used everywhere), but the inner scan loop searches
for \r\n-- as the part separator prefix. A boundary like x\r\n--y
causes the scanner to find false \r\n-- hits inside the boundary token
itself before the ngx_memcmp can reject them. For adversarial inputs
this forces O(n × m) backtracking on every false hit. More seriously it
contradicts RFC 2046 §5.1.1. Fix: reject any boundary byte outside the
RFC bcharsnospace set.

The boundary ABNF in RFC 2046 §5.1.1, boundary := 0*69<bchars> bcharsnospace, over [0-9A-Za-z'()+_,-./:=?], is normative grammar; a fully conformant SENDER MUST stay inside it. The spec is silent on what a RECEIVER MUST do with a non-conforming boundary.

So the question is not "is the ABNF normative" (it is) but "should a receiver reject syntactically non-conforming boundaries". The specs do not require that.

Also, the \r\n--in-boundary scenario is prevented by nginx's own header parser: the sw_value state treats CR and LF as end-of-value and rejects NUL with NGX_HTTP_PARSE_INVALID_HEADER. By the time Content-Type reaches ngx_js_form_parse_content_type, these characters are already gone.

Other non-bcharsnospace bytes that can still arrive (tab, high bytes, RFC-illegal punctuation, an embedded " via \" in the quoted form) are compared bytewise with ngx_memcmp in ngx_js_form_find; they are never used as regex, shell, or log input. The boundary value is also not surfaced back to JS, removing log- and response-injection avenues. There is no parser-confusion vector.

The remaining theoretical concern is performance: a body crafted to trigger many false \r\n-- hits, each followed by a long boundary memcmp. Bounded by NGX_JS_FORM_MAX_BOUNDARY_LEN = 200, this is O(n × 200) = O(n), where n is the body size, itself capped by client_max_body_size.

I plan to keep the current behavior. Rejecting non-standard but harmless boundaries in HTTP form uploads would be compatibility-hostile with little security value: the dangerous bytes are already filtered by nginx's header parser, the rest are inert under bytewise matching, and no peer parser (I looked at a few) enforces the alphabet. The 200-byte cap plus nginx header parsing is the right balance.
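
For reference, the validator being declined is small; a plain-JS sketch of the RFC 2046 grammar check (isValidBoundary is illustrative, not code from this patch):

```javascript
// RFC 2046 §5.1.1: boundary := 0*69<bchars> bcharsnospace, where
// bchars is bcharsnospace plus space, i.e. 1-70 characters from the
// set below, not ending in a space. Illustrative sketch only.
function isValidBoundary(b) {
    return b.length >= 1 && b.length <= 70
        && !b.endsWith(" ")
        && /^[0-9A-Za-z'()+_,\-./:=? ]*$/.test(b);
}

isValidBoundary("----WebKitFormBoundaryXyZ");  // true
isValidBoundary("x\r\n--y");                   // false: CR/LF not in bchars
```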

Sounds good. If we remove a4aed9e, we should also drop our L3 commit (4276ee3), since the distinct messages only matter if the validation exists.

xeioex (Contributor Author) commented May 4, 2026

@VadimZhestikov

  1. Empty field names accepted silently in URL-encoded forms
    Input =value produces an entry where name.len == 0. The test suite covers missing name= in multipart
    Content-Disposition, but not bare =value in URL-encoded bodies. This entry is then reachable via form.get(""). Either
    reject it (consistent with the multipart behavior) or document it.

The behavior is intentional, spec-conformant, and already covered by the
test suite. URL-encoded =value with empty name is the prescribed output
of the WHATWG application/x-www-form-urlencoded parser, which is the
behavior URLSearchParams exposes in every browser and in Node.

let p = new URLSearchParams('a=1&a=2&empty=&=blank&space=one+two');
console.log('get(""):', p.get(''));               // "blank"
console.log('has(""):', p.has(''));               // true

@VadimZhestikov (Contributor)

  1. Filename value is silently dropped

The filename= parameter is parsed only to set is_file = 1; the string itself is never stored in ngx_js_form_entry_t. So fileFieldNames() returns the field name attribute (e.g., "attachment"), not the actual uploaded filename (e.g., "report.pdf"). A user asking "what files were uploaded and what are they named?" has no way to answer this through the current API. This is either a usability deficiency that needs a design decision before merge, or it must be clearly documented as a known limitation.

A full File API is a very large scope, not for this patch, so files have limited support. But I agree that a user needs access to the filename. I redesigned the API to follow Fetch's FormData by implementing a minimal subset of the file API: for an attached file, an NginxHTTPRequestFormFile is now returned. fileFieldNames() is removed as non-standard as well.

+interface NginxHTTPRequestFormFile {
+    readonly name: string;
+}
+
+type NginxHTTPRequestFormValue = string | NginxHTTPRequestFormFile;
+
+interface NginxHTTPRequestForm {
+    get(name: NjsStringOrBuffer): NginxHTTPRequestFormValue | null;
+    getAll(name: NjsStringOrBuffer): NginxHTTPRequestFormValue[];
+ ..

The redesign looks good, thanks.

@VadimZhestikov (Contributor)

  1. readRequestForm() is not tested in js_access phase
    js_request_form.t tests form parsing but through js_content. js_access_body.t tests body reading in the access phase
    but only for readRequestText/ArrayBuffer/JSON. The FORM bitflag in the state machine (state & 4) is exercised by a
    different code path than the plain body read. There is no test that calls readRequestForm() from a js_access handler,
    which is the primary advertised use case.

This is incorrect. The first location:

location /access_form {
    js_access test.access_form;
    js_content test.content;
}
async function access_form(r) {
    try {
        r.variables.foo = render(await r.readRequestForm({maxKeys: 8}));

    } catch (e) {
        r.variables.foo = `${e.constructor.name}:${e.message}`;
    }
}

And the first test hits that.

Correct, it is a mistake in the review. Thanks!

@nginx nginx deleted a comment from xeioex May 4, 2026
@nginx nginx deleted a comment from xeioex May 4, 2026
@nginx nginx deleted a comment from xeioex May 4, 2026
VadimZhestikov (Contributor) commented May 5, 2026

  1. "Concurrent reads rejected" semantics need clarification in the TypeScript docs
    The TS file says body reading methods work in js_access and js_content. The test correctly shows that concurrent reads (second readRequest* before first promise resolves) throw an error, but sequential reads work fine and return cached data. The phrase in the TS JSDoc comment — "concurrent" — needs to be defined explicitly so users don't assume all cross-method reads are forbidden.

The word "concurrent" does not appear anywhere in ts/ngx_http_js_module.d.ts (or any other TS declaration).

> grep -in concurrent nginx/*.c ts/*.d.ts
nothing

It is technically correct -- the finding was imprecisely worded.

However, the actual content of our commit 66f1371 is still valid. We never used the word "concurrent" in the TypeScript file either -- what we added was:

The body is buffered after the first call; sequential calls with different methods (e.g. readRequestText() followed by
readRequestArrayBuffer()) return the same buffered data. Calling a second read method while the first Promise is still pending throws an error.
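
The distinction is easy to model; a toy plain-JS sketch of the semantics described above (makeBodyReader is illustrative, not njs internals):

```javascript
// Model of the documented behavior: the first read buffers the body,
// a second read while the first Promise is pending throws, and
// sequential reads return the cached data.
function makeBodyReader(fetchBody) {
    let cached = null;
    let pending = false;

    return async function read() {
        if (cached !== null) {
            return cached;                 // sequential call: cached body
        }

        if (pending) {
            throw new Error("request body is already being read");
        }

        pending = true;
        cached = await fetchBody();        // first call: buffer the body
        pending = false;
        return cached;
    };
}
```

With this model, two back-to-back calls without awaiting reject the second call, while awaited sequential calls succeed and reuse the buffer.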

The real actionable issue from this commit: the second change we made -- updating readRequestForm() to say "File parts are detected (getFileName() returns the original filename)" -- is now stale because you removed getFileName() in your API redesign.

@nginx nginx deleted a comment from xeioex May 5, 2026
xeioex (Contributor Author) commented May 5, 2026

  1. body_read_data vs request_form caching interaction undocumented
    When readRequestText() is called first and the body is cached, then readRequestForm() is called: does the form parser
    re-use the cached body bytes or re-trigger ngx_http_read_client_request_body? The state machine's FORM flag suggests this is handled, but there is no test for this call order, and the TypeScript docs say nothing about it.

I added a comment to readRequestText() for this and changed the other readRequest*() docs to reference the main readRequestText() text. A test is also added.

xeioex (Contributor Author) commented May 5, 2026

@VadimZhestikov

  1. Inherit/override behavior for js_access at multiple levels
    The directive is registered for NGX_HTTP_MAIN_CONF | NGX_HTTP_SRV_CONF | NGX_HTTP_LOC_CONF.

Thanks for catching it; aligned the levels to js_content's "NGX_HTTP_LOC_CONF|NGX_HTTP_LIF_CONF|NGX_HTTP_LMT_CONF". The merging logic itself is standard here.

The directive registers a JavaScript handler in the access phase,
running after built-in access checkers (allow/deny, auth_basic,
auth_request).  r.subrequest(), ngx.fetch() and other async operations
are supported.

The handler defaults to NGX_OK (access granted) on normal completion,
matching the behavior of other access phase modules.  The r.decline()
method allows the handler to return NGX_DECLINED (no opinion), deferring
the decision to other access checkers under "satisfy any".

The r.return() method can send any HTTP response from the access phase,
including 3xx redirects for authentication flows.

xeioex (Contributor Author) commented May 5, 2026

@VadimZhestikov

  1. Comment the --boundary / --boundary-- prefix search logic
    The initial scan in ngx_js_form_parse_multipart uses the same delimiter bytes for both the opening (dlen) and closing
    (cdlen) search. That they can point to the same position (empty body case) and are handled correctly is non-obvious.
    One comment would prevent future maintainers from "simplifying" the check incorrectly.

Added the commentary.

xeioex (Contributor Author) commented May 5, 2026

@VadimZhestikov

  1. ngx_js_form_find could use memmem
    On Linux, memmem(3) is available and uses a two-way algorithm. The current naive scan is bounded to O(n × 200) by NGX_JS_FORM_MAX_BOUNDARY_LEN, which is acceptable but leaves performance on the table for common cases with shorter boundaries. This is low priority but could be a one-line fix.

This change requires a measurement to justify it, and it is not a one-liner: as portable code, we would need a configure-time check for the non-POSIX memmem().

@xeioex xeioex force-pushed the nginx_body_routing_done branch from d079547 to 3c8a1d5 Compare May 5, 2026 05:50
VadimZhestikov (Contributor) commented May 5, 2026

Intermediate review summary — 3 issues still open in xeioex/nginx_body_routing_done

Thank you for the API redesign, body-cache JSDoc, boundary comment, and js_access config fixes — all good improvements. Three correctness/security issues from the original review remain
unaddressed in the current nginx_body_routing_done head (3c8a1d5).

We have working patches for all three on branch nginx_body_routing_done_fixes
(VadimZhestikov/njs@nginx_body_routing_done...VadimZhestikov:njs:nginx_body_routing_done_fixes) rebased directly onto your squashed commit — feel free to cherry-pick.


M1 — maxKeys has no upper bound

ngx_http_js_request_form_max_keys (NJS path) and ngx_http_qjs_request_form_max_keys (QJS path) only reject values < 1; there is no upper-bound check. An arbitrarily large maxKeys passes
validation and is forwarded to the parser with no guard.

Suggested fix — add a constant and validate both bounds in both validators:

  // ngx_js_form.h
  #define NGX_JS_FORM_MAX_KEYS_LIMIT  65536

  // NJS path
  if (n < 1 || (ngx_uint_t) n > NGX_JS_FORM_MAX_KEYS_LIMIT) {
      njs_vm_type_error(vm, "\"maxKeys\" must be between 1 and %ud",
                        (ngx_uint_t) NGX_JS_FORM_MAX_KEYS_LIMIT);
      return NJS_ERROR;
  }

  // QJS path
  if (n < 1 || (ngx_uint_t) n > NGX_JS_FORM_MAX_KEYS_LIMIT) {
      JS_ThrowTypeError(cx, "\"maxKeys\" must be between 1 and %d",
                        (int) NGX_JS_FORM_MAX_KEYS_LIMIT);
      return NGX_ERROR;
  }

L1 — filename* (RFC 5987) parts are not detected as file uploads

ngx_js_form_parse_disposition (ngx_js_form.c ~line 570) only tests for filename; a Content-Disposition that uses only filename*=UTF-8''hello.txt (RFC 5987 extended notation) leaves is_file as
0. As a result form.get('field') returns a plain string instead of a NginxHTTPRequestFormFile object, and hasFiles() returns false for such parts.

The fix is a single else if branch — we don't need to decode the encoded value, just set the flag:

  } else if (param.len == sizeof("filename*") - 1
             && ngx_strncasecmp(param.data, (u_char *) "filename*",
                                param.len) == 0)
  {
      /*
       * RFC 5987 extended parameter (filename*=charset'lang'value).
       * We do not decode the encoded value but mark the part as a file
       * upload so that hasFiles() works correctly.
       */
      *is_file = 1;
  }

L2 — NGX_ERROR from allocation failures is silently converted to NGX_JS_FORM_PARSE_ERROR

Five functions call sub-functions that can return NGX_ERROR (pool allocation failure) but check only != NGX_JS_FORM_OK and unconditionally return NGX_JS_FORM_PARSE_ERROR. An OOM condition is
therefore reported to the caller as a parse error, masking the real failure.

Affected call sites (current line numbers in ngx_js_form.c):

  ┌────────────────────────────────┬───────────────┬───────────────────────────────────────────────────────────┐
  │            Function            │     Lines     │                          Callee                           │
  ├────────────────────────────────┼───────────────┼───────────────────────────────────────────────────────────┤
  │ ngx_js_form_parse_content_type │ 183           │ ngx_js_form_parse_param                                   │
  ├────────────────────────────────┼───────────────┼───────────────────────────────────────────────────────────┤
  │ ngx_js_form_parse_urlencoded   │ 256, 266, 273 │ ngx_js_form_decode_urlencoded (×2), ngx_js_form_add_entry │
  ├────────────────────────────────┼───────────────┼───────────────────────────────────────────────────────────┤
  │ ngx_js_form_parse_multipart    │ 359, 399      │ ngx_js_form_parse_part_headers, ngx_js_form_add_entry     │
  ├────────────────────────────────┼───────────────┼───────────────────────────────────────────────────────────┤
  │ ngx_js_form_parse_part_headers │ 496           │ ngx_js_form_parse_disposition                             │
  ├────────────────────────────────┼───────────────┼───────────────────────────────────────────────────────────┤
  │ ngx_js_form_parse_disposition  │ 556           │ ngx_js_form_parse_param                                   │
  └────────────────────────────────┴───────────────┴───────────────────────────────────────────────────────────┘

Pattern to apply at each site:

  rc = ngx_js_form_...(pool, ...);
  if (rc != NGX_JS_FORM_OK) {
      return rc;  // propagates NGX_ERROR unchanged
  }

Each function needs ngx_int_t rc; added to its locals.

@VadimZhestikov (Contributor)

M1 — maxKeys has no upper bound
ngx_http_js_request_form_max_keys (NJS path) and ngx_http_qjs_request_form_max_keys (QJS path) only reject values < 1; there is no upper-bound check. An arbitrarily large maxKeys passes
validation and is forwarded to the parser with no guard.

I don't see a real risk: body size is already bounded by client_max_body_size, and the parser advances through the body, so max_keys only gates the entry counter, not unbounded allocation. The proposed 65536 limit is arbitrary, and "could be huge" doesn't translate to an actual attack.

Yes, correct. Thanks!

@nginx nginx deleted a comment from xeioex May 5, 2026
@xeioex xeioex force-pushed the nginx_body_routing_done branch from 3c8a1d5 to 7c7a6c3 Compare May 5, 2026 23:01
xeioex (Contributor Author) commented May 5, 2026

L2 — NGX_ERROR from allocation failures is silently converted to NGX_JS_FORM_PARSE_ERROR

Applied, thanks.

@VadimZhestikov (Contributor)

L1 — filename* (RFC 5987) parts are not detected as file uploads
ngx_js_form_parse_disposition (ngx_js_form.c ~line 570) only tests for filename; a Content-Disposition that uses only filename*=UTF-8''hello.txt (RFC 5987 extended notation) leaves is_file as
0. As a result form.get('field') returns a plain string instead of a NginxHTTPRequestFormFile object, and hasFiles() returns false for such parts.

The spec for our context forbids filename*. RFC 7578 §4.2 (current multipart/form-data definition) explicitly says the RFC 5987 encoding "MUST NOT be used" in multipart bodies. The RFC 6266 use of filename* applies to HTTP Content-Disposition response headers, not to multipart request bodies.

Non-ASCII filenames already work for everyone. Every client that needs them sends raw UTF-8 in filename="日本.txt", and our parser already passes those bytes through unchanged. So the user-visible problem "my Cyrillic filename doesn't work" is not solved by adding filename* support -- it is already solved.

Given the very limited legacy scope, it is not worth the effort to support filename* decoding (~500 LOC). Also, filenames are tangential to this PR.

I added the test for UTF-8 filename though.

Makes sense, thanks!

@nginx nginx deleted a comment from xeioex May 5, 2026
@xeioex xeioex force-pushed the nginx_body_routing_done branch 2 times, most recently from 3e410d5 to 19b45db Compare May 5, 2026 23:32
Added async methods
    - r.readRequestText() as string
    - r.readRequestArrayBuffer() as ArrayBuffer
    - r.readRequestJSON() as object.

that return Promises resolving to the request body as the
corresponding type.
@VadimZhestikov (Contributor)

Found another issue during security review — NJS engine only

readRequestForm({}) (empty options object, perfectly valid JS for "use all defaults") returns HTTP 500 in the NJS engine path but works correctly in QJS.

Root cause — ngx_http_js_request_form_max_keys(), ngx_http_js_module.c:

njs_vm_object_prop() returns NULL in two distinct situations: a real VM exception, and a simply absent property. The validator treats both as an error:

  value = njs_vm_object_prop(vm, options, &max_keys_name, &lvalue);
  if (value == NULL) {
      return NJS_ERROR;   // fires even when {} has no maxKeys key
  }
  if (njs_value_is_undefined(value)) {
      return NJS_OK;      // unreachable when value is NULL
  }

The QJS path is not affected — JS_GetPropertyStr() returns JS_UNDEFINED for absent properties and JS_EXCEPTION only on error, so the existing JS_IsUndefined() check there already handles this
case correctly.

Fix (one line, matching the pattern used elsewhere in this file for optional options):

  value = njs_vm_object_prop(vm, options, &max_keys_name, &lvalue);
  if (value == NULL || njs_value_is_undefined(value)) {
      return NJS_OK;
  }

Test that demonstrates the asymmetry (QJS passes, NJS fails before the fix):

The test and fix are on nginx_body_routing_done_fixes (VadimZhestikov/njs@nginx_body_routing_done...VadimZhestikov:njs:nginx_body_routing_done_fixes) as commit
f6acc94.

The async method parses the client request body as an HTML form
and returns a Promise resolving to a form object with get(),
getAll(), has(), forEach(), hasFiles() accessors.

Supports "application/x-www-form-urlencoded" and "multipart/form-data"
content types.  File parts are detected but their contents are not
exposed.  An optional maxKeys option caps the number of fields.

A proper File API with streaming Blob semantics is a significant
amount of work and is out of scope.
@xeioex xeioex force-pushed the nginx_body_routing_done branch from 19b45db to 0ab0daf Compare May 6, 2026 00:12
xeioex (Contributor Author) commented May 6, 2026

@VadimZhestikov

readRequestForm({}) (empty options object, perfectly valid JS for "use all defaults") returns HTTP 500 in the NJS engine path but works correctly in QJS.

Fixed.

@VadimZhestikov (Contributor) left a comment

Looks good

@xeioex xeioex merged commit 0ffc96d into nginx:master May 6, 2026
2 checks passed