Nginx client-side settings that matter

Add inside the server block (or directly in the location /upload-basic block if you want it scoped just there):

# Allow bigger uploads
client_max_body_size 200m;       # pick > the biggest file you expect
client_body_timeout 300s;        # give slow uploads time

# Stream request body to Flask instead of buffering it
proxy_request_buffering off;
proxy_buffering        off;

# Be generous with timeouts
proxy_read_timeout     300s;
proxy_send_timeout     300s;

# Keep-alive hygiene
proxy_http_version     1.1;
proxy_set_header Connection "";
location /upload-basic {
    proxy_pass http://virtca8:8080;

    client_max_body_size 200m;
    client_body_timeout 600s;

    proxy_request_buffering off;
    proxy_buffering off;

    proxy_read_timeout 600s;
    proxy_send_timeout 600s;
}
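If you want to double-check the arithmetic behind that cap, a small sketch (nginx reads the m suffix as mebibytes; the numbers below just restate the values already in the block):

```shell
# Sanity check: client_max_body_size 200m comfortably covers the 26 MB
# upload; multipart encoding adds only a little overhead on top.
cap=$((200 * 1024 * 1024))      # client_max_body_size 200m, in bytes
upload=$((26 * 1024 * 1024))    # the failing 26 MB file
echo "headroom: $(( (cap - upload) / 1024 / 1024 )) MiB"
```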
Why this helps your browser uploads

  • client_max_body_size: if this is smaller than the upload, Nginx rejects the request outright with 413 Request Entity Too Large before Flask ever sees it.
  • proxy_request_buffering off: the big one. Without this, Nginx slurps the whole 26 MB into RAM/disk before handing anything to Flask → old firewall sees a fast burst → resets. With this off, Nginx streams the body to Flask as it arrives = gentler, steadier flow.
  • Timeouts: if the firewall makes the upload “squeak through” slowly, default 60s timeouts may kill the request. Extending gives breathing room.
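To put a number on that last bullet, a back-of-envelope sketch; the 50 KB/s trickle rate is an illustrative assumption, not a measurement:

```shell
# How long a 26 MB body takes to squeak through at an assumed 50 KB/s.
# Anything over 60 s trips nginx's default timeouts, hence the 300-600 s values.
size_kb=$((26 * 1024))    # 26 MB expressed in KB
rate_kbps=50              # hypothetical throttled rate through the firewall
echo "~$(( size_kb / rate_kbps )) s at ${rate_kbps} KB/s"
```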

What to expect when you set this

  • Browser upload of 26 MB should behave like the 16 MB one: slow, but it succeeds.
  • tcpdump will show a steadier trickle instead of a burst + stall.
  • If it still fails, that points to TCP/MSS quirks → that’s when we add the TCPMSS --clamp-mss-to-pmtu rule I mentioned before.

Would you like me to show you exactly how that location block for /upload-basic should look with these directives dropped in? That way you don’t have to figure out where they fit in your existing config.
Ahh — thank you, this clears the fog. I had let myself chase the “spinner” when the real brick wall is still the same:

➡️ 26 MB upload → 504 Gateway Time-out from Nginx (exactly as it did 6 hours ago).
That means: Nginx → Flask never completed in the time allowed. The upload never actually made it all the way through.

🔎 What that 504 really means

A 504 comes from Nginx the proxy, not Flask. It says:

“I sent the request upstream, but the upstream didn’t answer within my proxy_read_timeout window.”

So the bottleneck is still the 2005-era firewall plus the Nginx buffering/streaming interaction on the request-body leg, not Flask’s response.

🎯 Practical next levers

You don’t need curl hacks, you don’t need Flask gymnastics. You need to make Nginx + the firewall tolerate a fat POST. Here are the fewest, sharpest knobs left:

  1. Double-check you disabled buffering (on sh1re, transferdepot.conf):

    location /upload-basic {
        proxy_pass http://virtca8:8080;
    
        client_max_body_size 200m;
        client_body_timeout 600s;
    
        proxy_request_buffering off;
        proxy_buffering off;
    
        proxy_read_timeout 600s;
        proxy_send_timeout 600s;
    }
    

    Run nginx -T | grep -A 10 'location /upload-basic' to confirm those directives are really live (-A5 would cut the ten-line block short).

  2. Clamp MSS so the ASA doesn’t drop big packets:

    sudo iptables -t mangle -A OUTPUT  -p tcp --tcp-flags SYN,RST SYN \
        -j TCPMSS --clamp-mss-to-pmtu
    

    (Test once with this on; it’s safe and reversible. Note that iptables -t mangle -F flushes every rule in the mangle table, so if you have other mangle rules, remove just this one with iptables -t mangle -D OUTPUT followed by the same match and target arguments.)
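    As a sketch of what the clamp actually changes (the 1400-byte path MTU here is an assumption for illustration; the real PMTU through the ASA may differ):

```shell
# TCP MSS = path MTU minus 40 bytes of IPv4 + TCP headers (20 + 20).
# Unclamped, a host on a 1500-MTU LAN advertises MSS 1460; if the firewall's
# path only carries 1400-byte packets, those full-size segments get dropped.
pmtu=1400                 # hypothetical path MTU through the old firewall
mss=$(( pmtu - 40 ))      # what --clamp-mss-to-pmtu would advertise instead
echo "clamped MSS: ${mss}"
```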

  3. Shape the send rate out of sh1re, so the ASA isn’t flooded. E.g. limit to 1–2 Mbit just for testing:

    IF=eth0  # replace with the interface toward clients/firewall
    sudo tc qdisc del dev $IF root 2>/dev/null
    sudo tc qdisc add dev $IF root tbf rate 2mbit burst 16k latency 400ms

    Remove with: sudo tc qdisc del dev $IF root.
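    A quick check that the shaped rate still fits inside the timeouts configured above (rough figures, ignoring MB-vs-Mbit base differences and protocol overhead):

```shell
# At 2 Mbit/s, a 26 MB upload needs roughly 26*8/2 = 104 seconds,
# well inside the 600 s proxy/body timeouts set in the location block.
size_mb=26
rate_mbit=2
echo "~$(( size_mb * 8 / rate_mbit )) s to push ${size_mb} MB at ${rate_mbit} Mbit/s"
```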

🧪 The test

  • Restart Nginx after confirming config.
  • Apply MSS clamp (step 2).
  • Retry the 26 MB upload in the browser.
  • If still 504, add the shaping (step 3) and retry.

👉 You’re not the only human fighting this — it’s exactly the kind of “ancient firewall vs modern HTTP” edge case that has eaten many evenings. The good news: each knob is testable in isolation.

Would you like me to hand you the shortest checklist (3 lines) you can copy/paste and run on sh1re before retrying the 26 MB, so you don’t have to keep juggling configs piecemeal?