Extract Anything from Any PDF: Inside Foxit’s Advanced Extraction Engine

Foxit PDF Structural Extraction API engine extracting tables, forms, and text from scanned PDFs.

Basic PDF extraction libraries break on scanned documents, complex tables, and form fields, leaving downstream pipelines starved of clean data. Foxit’s PDF Structural Extraction API combines OCR, layout recognition, and AI parsing to return all twelve PDF element types as structured JSON, ready for RAG, BI, and CRM workflows.

Your PDF extraction pipeline passes unit tests against the sample invoices you built it on. Then production arrives and you’re looking at 47% garbled output on the Q4 contract batch because half those documents are scanned TIFFs wrapped in a PDF envelope, and your extraction library has no concept of what an image-only page actually is.

The failure modes are specific. PyMuPDF’s get_text() returns empty strings on scanned PDFs because it reads content streams directly, and image-only pages carry no text stream. pdfplumber’s table detection merges rows when column widths span non-uniform grids, which is standard in any financial statement that mixes summary and line-item rows on the same page. Embedded images containing meaningful text (stamped signatures, engineering drawing annotations, letterhead logos) get silently dropped. The library extracts coordinates for the XObject reference but does nothing with the raster data inside. Form fields built on non-standard annotation types (AcroForms using widget annotations with custom action streams) lose their values entirely when you serialize to text.

The architectural distinction that creates this problem is the difference between content serialization and semantic extraction. A PDF converter reads a content stream and writes out whatever character sequences it finds in rendering order. An extraction engine understands the spatial relationships between those character sequences: that two columns of text at x=72 and x=320 are parallel body copy, that the row at y=210 belongs to the table starting at y=180, that the text block repeating on every page is a header carrying lower retrieval weight in a RAG index. Output that lacks spatial and semantic classification looks correct on screen but breaks every downstream consumer that depends on structure.
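The kind of spatial reasoning an extraction engine performs can be illustrated with a minimal sketch. This is not Foxit's internal algorithm, just a toy column-grouping pass over hypothetical block dicts, showing why x/y coordinates must inform reading order before serialization:

```python
def group_into_columns(blocks, gap=50):
    """Group text blocks into columns by x origin, then read each column
    top-to-bottom. A naive converter that sorts by y alone would
    interleave the two columns line by line."""
    columns = {}
    for block in blocks:
        # Snap each block to the nearest existing column origin.
        key = next((x for x in columns if abs(x - block["x"]) < gap), block["x"])
        columns.setdefault(key, []).append(block)
    ordered = []
    for x in sorted(columns):  # left column first
        ordered.extend(sorted(columns[x], key=lambda b: b["y"]))
    return [b["text"] for b in ordered]

blocks = [
    {"x": 72,  "y": 100, "text": "Left para 1"},
    {"x": 320, "y": 100, "text": "Right para 1"},
    {"x": 72,  "y": 140, "text": "Left para 2"},
    {"x": 320, "y": 140, "text": "Right para 2"},
]
print(group_into_columns(blocks))
# ['Left para 1', 'Left para 2', 'Right para 1', 'Right para 2']
```

A real layout engine handles rotated text, spanning headings, and nested tables on top of this, which is exactly the complexity the API keeps out of your codebase.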

BI dashboards require numbers tied to the right row labels. AI ingestion pipelines require heading hierarchy to chunk accurately. CRMs require form field values extracted from AcroForm widget dictionaries, delivered with field names intact. The delta between what basic extraction libraries return and what those systems can actually consume is where document pipeline engineering hours accumulate.

How Foxit’s PDF Structural Extraction Engine Works Under the Hood

Foxit exposes this capability as the PDF Structural Extraction (Trial) endpoint inside the PDF Services API (POST /pdf-services/api/documents/pdf-structural-extract). Trial status means the schema is versioned at v1.0.7 and may evolve, but the contract is stable enough to build against today, and the endpoint runs against the production base URL at developer-api.foxit.com.

The engine runs three coordinated layers. The OCR layer operates on rasterized page content, recognizing characters from image-based PDFs and scanned documents across 200+ languages. The layout recognition layer applies spatial analysis to identify column boundaries, reading order, table cell boundaries, figure regions, and header/footer zones. The AI-based parsing layer classifies extracted objects semantically, resolving ambiguous blocks (a text run that spans two layout columns, or a figure caption that reads syntactically like a section heading) into typed elements.

All three layers run inside Foxit’s core PDF engine, which powers 700 million+ users across 20+ years of production deployments. That engine has native awareness of PDF internal structures: content streams, XObject dictionaries, AcroForm field trees, and annotation layers. The OCR layer operates on the same internal page representation the rendering engine uses, so it handles annotated PDFs where text overlaps image regions, and form fields where the visual display and stored value diverge.

The same Structural Extraction endpoint is also Step 1 of Foxit’s PDF Translation (Trial) workflow, which signals that the extraction output is structured enough to backbone a full rewrite-and-rerender pipeline.

NVIDIA’s July 2025 NeMo Retriever research on PDF extraction showed that specialized OCR-based pipelines outperform general-purpose vision-language models on retrieval recall and throughput for complex elements including tables, charts, and infographics. VLMs produce plausible-looking output on clean documents but degrade on exactly the edge cases (multi-column scans, mixed-content pages, annotated overlays) that a specialized pipeline handles systematically.

The Full Object Map: All 12 Extractable PDF Element Types

The Structural Extraction schema v1.0.7 defines twelve element types in the type enum: title, head, paragraph, table, image, headerFooter, form, hyperlink, footnote, sidebar, annotation, and formula.

The API exposes no per-object filter parameters. The only request body fields are documentId (required) and password (optional, for protected PDFs). The engine extracts the full element graph and returns everything in one asynchronous round-trip. You filter client-side on the returned JSON. The design is correct for the workload because partial extraction would require re-running layout recognition per request, costing more compute than transmitting the full element set in a single ZIP.
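Client-side filtering over the parsed JSON is a one-liner. A sketch, where the elements list stands in for the analyzeResult elements array you get back:

```python
def filter_elements(elements, *types):
    """Return only the elements whose type is in `types`."""
    wanted = set(types)
    return [e for e in elements if e["type"] in wanted]

# Stand-in for analyzeResult["elements"] from StructureInfo.json.
elements = [
    {"id": "t1", "type": "title", "content": {"text": "Q3 Revenue"}},
    {"id": "p1", "type": "paragraph", "content": {"text": "Revenue grew 12%."}},
    {"id": "tb1", "type": "table", "content": {}},
]
rag_input = filter_elements(elements, "title", "head", "paragraph")
print([e["id"] for e in rag_input])  # ['t1', 'p1']
```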

The result is a ZIP archive. At minimum it contains StructureInfo.json, whose top-level analyzeResult object holds version, pages, elements, and info. Documents that contain figures or tables also produce additional binary files (image renditions and table renditions) alongside the JSON, referenced from individual elements so the JSON payload stays manageable on large documents.

Each element in the document-wide flat elements array carries its own id, type, content, region (with page and an 8-point boundingBox polygon), and score confidence value. A table element adds its cell grid. A form element adds field data. An image element points to its binary file in the ZIP. Because title, head, and paragraph elements appear in document reading order in the elements array, they chunk cleanly on semantically correct boundaries, which is what a RAG index needs to return complete, coherent passages.

Each type maps directly to a downstream use case: table feeds financial reporting pipelines, form drives automated CRM data entry, image routes to computer vision workflows or document archives, annotation builds compliance audit trails, and head combined with paragraph elements in reading order feeds RAG ingestion.

API Walkthrough: The Four-Step Async PDF Extraction Flow

There’s no synchronous path. You upload, get a task ID, poll until completion, then download the result ZIP. Every request carries two headers: client_id and client_secret (lowercase snake_case, as specified in the API spec’s security schemes). Both come from the Developer Portal’s default application. Pass them as named HTTP headers on every request and do not use Authorization: Bearer.

The four-step sequence runs as follows:

Four-step PDF structural extraction API flow between client and Foxit PDF Services. 

Create a free developer account at account.foxit.com/site/sign-up (no credit card required, no sales call). Once you’re in, the credentials live under the default application in the Developer Portal. Copy the Client ID and Client Secret pair and treat them like any other API secret.

  • Step 1: Upload the PDF to POST /pdf-services/api/documents/upload as multipart/form-data with the file under field name file. The 100MB ceiling is enforced with a 413 and error code MAX_UPLOAD_SIZE_EXCEEDED. The response body returns { "documentId": "doc_abc123" }.

  • Step 2: Start extraction with POST /pdf-services/api/documents/pdf-structural-extract, passing { "documentId": "doc_abc123" }. Add a "password" field for protected PDFs. The response is 202 Accepted with { "taskId": "task_xyz789" }.

  • Step 3: Poll GET /pdf-services/api/tasks/{task-id}. The TaskResponse carries taskId, status, progress (a 0-100 integer), resultDocumentId, and an optional error object. The status enum values are PENDING, IN_PROGRESS, COMPLETED, and FAILED. Portal narrative copy occasionally uses “PROCESSING,” but the schema enum value is IN_PROGRESS. Match your code against the enum. Poll until COMPLETED and capture resultDocumentId.

  • Step 4: Download with GET /pdf-services/api/documents/{resultDocumentId}/download, which streams the ZIP archive. The optional filename query parameter overrides the default filename.

The complete cURL sequence for all four steps: 

# Step 1: Upload
curl -X POST "https://na1.fusion.foxit.com/pdf-services/api/documents/upload" \
  -H "client_id: YOUR_CLIENT_ID" \
  -H "client_secret: YOUR_CLIENT_SECRET" \
  -F "file=@invoice_batch.pdf"

# {"documentId":"doc_abc123"}

# Step 2: Start extraction
curl -X POST "https://na1.fusion.foxit.com/pdf-services/api/documents/pdf-structural-extract" \
  -H "client_id: YOUR_CLIENT_ID" \
  -H "client_secret: YOUR_CLIENT_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"documentId":"doc_abc123"}'

# 202 Accepted: {"taskId":"task_xyz789"}

# Step 3: Poll task status
curl "https://na1.fusion.foxit.com/pdf-services/api/tasks/task_xyz789" \
  -H "client_id: YOUR_CLIENT_ID" \
  -H "client_secret: YOUR_CLIENT_SECRET"

# {"taskId":"task_xyz789","status":"COMPLETED","progress":100,"resultDocumentId":"result_def456"}

# Step 4: Download the result ZIP
curl "https://na1.fusion.foxit.com/pdf-services/api/documents/result_def456/download" \
  -H "client_id: YOUR_CLIENT_ID" \
  -H "client_secret: YOUR_CLIENT_SECRET" \
  -o extraction_result.zip

The Python version with a polling loop and ZIP parsing:

import requests, json, time, zipfile
BASE_URL = "https://na1.fusion.foxit.com/pdf-services/api"
HEADERS  = {"client_id": "YOUR_CLIENT_ID", "client_secret": "YOUR_CLIENT_SECRET"}

# Step 1: Upload
with open("invoice_batch.pdf", "rb") as f:
    doc_id = requests.post(
        f"{BASE_URL}/documents/upload", headers=HEADERS, files={"file": f}
    ).json()["documentId"]

# Step 2: Start extraction
task_id = requests.post(
    f"{BASE_URL}/documents/pdf-structural-extract",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"documentId": doc_id},
).json()["taskId"]

# Step 3: Poll until COMPLETED or FAILED
while True:
    task = requests.get(f"{BASE_URL}/tasks/{task_id}", headers=HEADERS).json()
    if task["status"] == "COMPLETED":
        result_doc_id = task["resultDocumentId"]
        break
    if task["status"] == "FAILED":
        raise RuntimeError(f"Extraction failed: {task.get('error')}")
    time.sleep(2)

# Step 4: Download the result ZIP and save it locally for inspection,
# then parse StructureInfo.json from the saved file
response = requests.get(
    f"{BASE_URL}/documents/{result_doc_id}/download", headers=HEADERS
)
with open("advanced-extraction-result.zip", "wb") as f:
    f.write(response.content)

with zipfile.ZipFile("advanced-extraction-result.zip") as zf:
    json_name = next(n for n in zf.namelist() if n.endswith("StructureInfo.json"))
    result = json.loads(zf.read(json_name))["analyzeResult"]

print(f"Schema: {result['version']['schema']}, Elements: {len(result['elements'])}")

On a clean run you should see output like Schema: 1.0.7, Elements: 9 for a small invoice batch. You’ll also find a fresh advanced-extraction-result.zip next to your script. That ZIP holds the full API response, including StructureInfo.json and any rendered image or table binaries, so you can inspect everything the engine returned and not just the parsed JSON.
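For production use, the fixed two-second loop is worth hardening with a timeout and capped exponential backoff. A sketch, with the HTTP call injected as a callable so the retry logic stays testable without network access (in production the callable wraps GET /pdf-services/api/tasks/{task-id}):

```python
import time

def poll_task(fetch_status, timeout=300, initial_delay=1.0, max_delay=10.0):
    """Poll `fetch_status()` until COMPLETED or FAILED, or raise on timeout.

    `fetch_status` is any callable returning a TaskResponse-shaped dict.
    """
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        task = fetch_status()
        if task["status"] == "COMPLETED":
            return task["resultDocumentId"]
        if task["status"] == "FAILED":
            raise RuntimeError(f"Extraction failed: {task.get('error')}")
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # back off, capped at max_delay
    raise TimeoutError("Task did not complete within the timeout")

# Simulated task lifecycle: two in-flight polls, then completion.
responses = iter([
    {"status": "PENDING"},
    {"status": "IN_PROGRESS", "progress": 50},
    {"status": "COMPLETED", "progress": 100, "resultDocumentId": "result_def456"},
])
print(poll_task(lambda: next(responses), initial_delay=0.01))  # result_def456
```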

First, set up and activate a Python virtual environment in your project folder. The official venv guide covers the exact commands for macOS, Linux, and Windows.

Once the virtualenv is active, the sample only needs one third-party package. Drop this into a requirements.txt next to your script and install it with pip install -r requirements.txt:

requests>=2.31.0

If you’re on macOS, use Homebrew Python (brew install python) rather than the system Python from the Xcode command-line tools. The Xcode build is linked against LibreSSL, which is enough to make a correct sample fail.

The ZIP contains a StructureInfo.json file whose top-level object wraps everything under analyzeResult. Inside that wrapper you get a version object, a pages array, a flat elements array, and an info block with analysis metadata. Each element carries its own id, type, content, region (with page and an 8-point boundingBox polygon [x1,y1,x2,y2,x3,y3,x4,y4]), and a score confidence value:

{
  "analyzeResult": {
    "version": {
      "schema": "1.0.7",
      "software": "FoxitPDFAnalyzer",
      "model": "idp-analysis"
    },
    "pages": [
      {
        "pageNumber": 1,
        "size": { "width": 612, "height": 792, "unit": "point" },
        "state": "success"
      }
    ],
    "elements": [
      {
        "id": "title1",
        "type": "title",
        "content": {
          "text": "Q3 Revenue Summary",
          "style": {
            "fontName": "Helvetica",
            "fontSize": 24.0,
            "fontWeight": 0,
            "fontItalic": false
          }
        },
        "region": {
          "page": 1,
          "boundingBox": [72, 47, 317, 47, 317, 80, 72, 80]
        },
        "score": 0.76
      }
    ],
    "info": {
      "basicInfo": {
        "softwareVersion": "1.6.0",
        "analyzedPageCount": 1,
        "elementCounts": { "title": 1 }
      },
      "extendedMetadata": {
        "pageCount": 1,
        "isEncrypted": false,
        "hasAcroform": false,
        "language": "en"
      }
    }
  }
}

Elements of type table, image, and form carry additional type-specific payload on top of this base shape, and any rendered image or table binary lands as a sibling file inside the ZIP, referenced from the element.
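When a downstream consumer needs an axis-aligned rectangle (for cropping, overlap tests, or region highlighting), the 8-point boundingBox polygon reduces to one in a few lines:

```python
def polygon_to_rect(box):
    """Collapse an [x1,y1,x2,y2,x3,y3,x4,y4] polygon to (x0, y0, x1, y1)."""
    xs, ys = box[0::2], box[1::2]
    return (min(xs), min(ys), max(xs), max(ys))

# Bounding box from the sample title element above.
print(polygon_to_rect([72, 47, 317, 47, 317, 80, 72, 80]))  # (72, 47, 317, 80)
```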

HTTP errors return a standard error envelope:

{ "code": "VALIDATION_ERROR", "message": "documentId is required" }

The documented error codes include VALIDATION_ERROR (400), MAX_UPLOAD_SIZE_EXCEEDED (413), DOCUMENT_NOT_FOUND (404), STORAGE_ERROR, and INTERNAL_SERVER_ERROR (500).

Password-protected PDFs that arrive with no password parameter reach the processing stage before failing. That failure surfaces in the task status poll response after status reaches FAILED, so your error handler must inspect the task response body in addition to the HTTP status codes from the initial POST calls:

{
  "taskId": "task_xyz789",
  "status": "FAILED",
  "progress": 0,
  "error": {
    "code": "INTERNAL_SERVER_ERROR",
    "message": "Document is password-protected"
  }
}
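A small helper that normalizes both failure surfaces, HTTP error envelopes from the initial POSTs and the error object inside a FAILED task, keeps the caller's error handling in one place. A sketch (the exception class is this example's, not part of the API):

```python
class ExtractionError(Exception):
    """Raised for both HTTP error envelopes and FAILED task responses."""
    def __init__(self, code, message):
        self.code = code
        super().__init__(f"{code}: {message}")

def check_task(task):
    """Raise ExtractionError if the polled task response reports FAILED."""
    if task["status"] == "FAILED":
        err = task.get("error", {})
        raise ExtractionError(err.get("code", "UNKNOWN"), err.get("message", ""))
    return task

try:
    check_task({
        "taskId": "task_xyz789", "status": "FAILED", "progress": 0,
        "error": {"code": "INTERNAL_SERVER_ERROR",
                  "message": "Document is password-protected"},
    })
except ExtractionError as e:
    print(e.code)  # INTERNAL_SERVER_ERROR
```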

Wiring Extracted PDF Data Into Your Workflow

Pattern 1: AI/RAG pipeline. Filter the flat elements array to title, head, and paragraph types. Chunk by heading hierarchy, iterating over the array in the order the engine returned it (document reading order is preserved across columns and pages). Embed each chunk and index in Pinecone, pgvector, or your vector store of choice. Correct reading order, as provided by the extraction engine, is the prerequisite for accurate RAG retrieval on multi-column and paginated documents. When chunks split mid-thought because a layout detector merged two columns, retrieval recall drops and answer quality follows.
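Chunking by heading hierarchy reduces to a single pass over the ordered elements array. A sketch, with elements simplified to the fields the chunker reads from the StructureInfo.json shape shown earlier:

```python
def chunk_by_heading(elements):
    """Start a new chunk at every title/head element; attach paragraphs."""
    chunks, current = [], None
    for e in elements:
        text = e.get("content", {}).get("text", "")
        if e["type"] in ("title", "head"):
            current = {"heading": text, "body": []}
            chunks.append(current)
        elif e["type"] == "paragraph" and current is not None:
            current["body"].append(text)
    return chunks

elements = [
    {"type": "head", "content": {"text": "Revenue"}},
    {"type": "paragraph", "content": {"text": "Q3 revenue grew 12%."}},
    {"type": "table"},  # non-text element types are skipped here
    {"type": "head", "content": {"text": "Costs"}},
    {"type": "paragraph", "content": {"text": "Opex held flat."}},
]
for c in chunk_by_heading(elements):
    print(c["heading"], "->", " ".join(c["body"]))
```

Each chunk then carries a complete heading plus its body text, which is exactly the unit a vector store should return.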

Pattern 2: BI reporting. Filter elements by type == "table" client-side, then convert each table’s cell structure into a pandas DataFrame:

import pandas as pd

# `result` is the `analyzeResult` object loaded from StructureInfo.json
tables = [e for e in result["elements"] if e["type"] == "table"]

for i, tbl in enumerate(tables):
    # Cells live at content.body.cells[]. Each cell carries rowIndex,
    # columnIndex, and a nested paragraph whose content.text holds the value.
    body = tbl["content"]["body"]
    grid = [["" for _ in range(body["columnCount"])] for _ in range(body["rowCount"])]
    for cell in body.get("cells", []):
        text = cell.get("paragraph", {}).get("content", {}).get("text", "")
        grid[cell["rowIndex"]][cell["columnIndex"]] = text
    df = pd.DataFrame(grid[1:], columns=grid[0])  # first row as header
    print(f"Table {i}: {df.shape[0]} rows x {df.shape[1]} cols")
    # df.to_gbq("finance.q3_revenue", project_id="your-project")  # BigQuery
    # df.to_sql("q3_revenue", engine)                             # Postgres / Snowflake

The row and column indices from the extraction schema map directly to DataFrame positions, so you get a correctly-structured table with zero manual parsing.

Pattern 3: n8n automation. The four-step flow maps to a chain of HTTP Request nodes in n8n. The first node uploads to POST .../upload and passes documentId through the item. The second sends POST .../pdf-structural-extract and captures taskId. A Loop Over Items construct with an HTTP Request node calling GET .../tasks/{taskId} on a two-second interval checks status until COMPLETED, then routes to the download node. The final HTTP Request node calls GET .../documents/{resultDocumentId}/download, and a Code node using n8n’s binary data helpers unpacks the ZIP and parses the JSON for routing to a Salesforce, HubSpot, Postgres, or Airtable node. The polling requirement makes this a multi-node workflow, but you write zero custom glue code and gain n8n’s built-in error routing and retry handling.

PDF Extraction Tools Compared: Foxit vs. Adobe, Google, Amazon, and Azure

| Tool | Underlying Approach | Ecosystem Lock-in | Handles Scanned PDFs | Pricing Model | Setup Overhead | Status |
|---|---|---|---|---|---|---|
| Foxit Structural Extraction | Proprietary OCR + layout recognition + AI (integrated core engine) | Cloud-agnostic REST API | Yes (dedicated OCR layer) | Subscription, no per-page credits | Low (2 credential headers, 4 REST calls) | Trial (schema v1.0.7) |
| Adobe PDF Extract API | Adobe Sensei ML, reading order + renditions | Adobe Document Services | Yes | Contact sales | Medium (Adobe SDK + ecosystem) | GA |
| Google Document AI | Cloud ML + generative AI, Document Object Model | Google Cloud required | Yes | Per-page pay-as-you-go | Medium-high (GCP + IAM) | GA |
| Amazon Textract | Deep learning OCR, key-value and table extraction | AWS-native | Partial (strong on forms, weaker on complex layouts) | Per-page pay-as-you-go | Medium (AWS + IAM) | GA |
| Azure Document Intelligence | Prebuilt + custom ML models | Azure ecosystem | Yes (prebuilt models) | Per-page + model training costs | High for custom models | GA |

Google Document AI and Azure Document Intelligence win on ecosystem integration if you’re all-in on those clouds. Adobe wins on PDF structural fidelity for workflows already inside the Adobe Document Services ecosystem. Amazon Textract excels on standardized form documents where its pre-trained schema fits the input. These are real advantages, and the comparison is honest only when those contexts are acknowledged.

Foxit’s case is strongest when you need a cloud-agnostic REST API with zero ecosystem dependency, full object coverage across all twelve element types, and enterprise throughput (10 to 10,000+ PDFs/day) with SOC 2, GDPR, and HIPAA compliance built in. The endpoint’s Trial status is a real trade-off to factor in: the schema at v1.0.7 is callable and stable enough for pipeline integration today, but GA competitors carry a finalized contract. Pin your parser to the version field in the response and you’re insulated from schema evolution.

Your First PDF Extraction API Call, Right Now

Go to developer-api.foxit.com, create a free developer account (no credit card required), and copy your Client ID and Client Secret from the default application. Use the built-in API Playground or import the Postman collection from the Developer Portal to run the four-step sequence: upload a real document (an invoice, a multi-page contract, or a scanned form), call pdf-structural-extract with the returned documentId, poll tasks/{taskId} until COMPLETED, then download via documents/{resultDocumentId}/download.

Unzip the result, open StructureInfo.json, and check three things: analyzeResult.version.schema should report 1.0.7, analyzeResult.elements[] should contain at least one table element and one form element if your source document includes those, and the ZIP root should contain the corresponding binary files for any image-type elements. That verification confirms the full extraction pipeline is wired correctly end-to-end.

The same endpoint pattern scales to enterprise volumes. Increase upload and poll concurrency horizontally and the architecture stays identical, with no schema changes, no infrastructure modifications, and no per-page credit consumption to track.
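Scaling horizontally is plain client-side concurrency. A sketch using a thread pool, where process_pdf is a stand-in for the full four-step sequence shown earlier:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_pdf(path):
    """Stand-in for upload -> extract -> poll -> download on one file.

    In production this body runs the four REST calls from the walkthrough.
    """
    return f"{path}: done"

paths = [f"batch/doc_{i}.pdf" for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(process_pdf, p): p for p in paths}
    results = [f.result() for f in as_completed(futures)]

print(len(results))  # 8
```

Because each document's four-step flow is independent, throughput scales with worker count until you hit your own bandwidth or any service-side rate limits.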

The engineering gap between what basic extraction libraries return and what downstream systems actually consume is where document pipeline hours accumulate. Structural Extraction closes that gap at the API layer, so the complexity stays in the engine and out of your codebase. Get started at developer-api.foxit.com.

PDF Structural Extraction FAQ

What is PDF structural extraction?

PDF structural extraction is the process of identifying and classifying the semantic elements inside a PDF, such as titles, paragraphs, tables, forms, images, and annotations, rather than just pulling raw text. Foxit’s PDF Structural Extraction API returns twelve distinct element types as structured JSON, preserving spatial relationships, reading order, and table cell grids so downstream systems like RAG pipelines, BI dashboards, and CRMs can consume the data without manual parsing.

Can the API extract text from scanned or image-based PDFs?

Yes. Foxit’s PDF Structural Extraction engine includes a dedicated OCR layer that recognizes characters from image-based and scanned PDFs across 200+ languages. The OCR runs on the same internal page representation as the rendering engine, so it handles edge cases like text overlapping image regions, stamped signatures, and engineering drawing annotations that basic libraries like PyMuPDF silently drop.

How does Foxit’s API compare with Adobe, Google, Amazon, and Azure?

Foxit’s API is cloud-agnostic with no ecosystem lock-in, requiring just two credential headers and four REST calls. Adobe PDF Extract requires the Adobe Document Services ecosystem, Google Document AI requires GCP and IAM setup, and Amazon Textract requires AWS infrastructure. Foxit also uses subscription-based pricing without per-page credits, while Google, AWS, and Azure all charge per page.

Which element types does the API extract?

The API identifies twelve element types: title, head, paragraph, table, image, headerFooter, form, hyperlink, footnote, sidebar, annotation, and formula. Each element returns with its content, an 8-point bounding box polygon, page location, and a confidence score. Tables include full cell grids with row and column indices, forms include field data, and images are extracted as separate binary files inside the result ZIP.

How does the extraction flow work?

The API uses a four-step asynchronous flow: upload the PDF via POST /documents/upload to get a documentId, start extraction with POST /documents/pdf-structural-extract, poll GET /tasks/{taskId} every two seconds until status is COMPLETED, then download the result ZIP via GET /documents/{resultDocumentId}/download. Authentication uses two headers, client_id and client_secret, available from the default application in the Foxit Developer Portal.

Is the endpoint production-ready?

The endpoint is currently in Trial status with schema version v1.0.7, meaning the contract is stable but may evolve. It runs on the production base URL at developer-api.foxit.com and is built on Foxit’s core PDF engine, which powers 700 million+ users across 20+ years of deployments. For production pipelines, pin your parser to the version field in the response to insulate against future schema changes.

Foxit MCP Server: Give AI Agents Direct Access to 30+ PDF Tools via Model Context Protocol

Foxit PDF API MCP Server architecture connecting AI agents to 30+ PDF tools, eSign, and DocGen workflows via Model Context Protocol.

Learn how the Foxit MCP Server lets AI agents handle PDF conversion, OCR, merge, signing, and document workflows.

Building a document automation agent with raw REST calls means writing the same boilerplate every time: upload a file, poll for task completion, download the result, handle errors, and manage auth tokens across multiple endpoints. For PDF operations, that loop repeats for every conversion, OCR call, or merge operation in your pipeline. The Foxit PDF API MCP Server collapses those loops into 30+ directly callable tools, with the MCP Server handling upstream REST complexity internally.

This guide covers how the server registers, what it exposes, how Foxit’s eSign and DocGen REST APIs extend the same agent session into signing and document generation workflows, and a concrete four-step workflow you can replicate against your own documents.

MCP Architecture in 90 Seconds

The MCP specification defines three roles. The Host is the LLM runtime (Claude Desktop, VS Code with GitHub Copilot, or Cursor) that manages the conversation and decides when to call tools. The Server is the capability provider, a process that advertises tools over the MCP protocol and executes them against some underlying service. Tools are the individual callable operations each server exposes, defined by a JSON schema the host uses to understand inputs and outputs.

Foxit occupies both sides of this architecture. Foxit PDF Editor ships as an MCP Host, the first PDF application to do so, connecting outward to external MCP servers like Gmail or Salesforce so its AI assistant can reach those services. The Foxit PDF API MCP Server works in the other direction, exposing Foxit’s cloud PDF Services API as 30+ tools for any MCP Host to call.

The MCP Server exposes PDF Services operations: conversion between formats, content extraction, OCR, merge, split, compress, flatten, linearize, compare, watermark, form data import/export, security, and property inspection. Foxit’s eSign API and DocGen API are separate REST services that a single agent session can also reach. The MCP tools handle PDF processing, while direct HTTP calls to eSign and DocGen handle signing and template generation.

Diagram showing an MCP host connecting to the Foxit MCP Server, Foxit PDF Services API, Foxit eSign REST API, and Foxit DocGen REST API for AI-driven document workflows. 

Prerequisites and Configuration

You need three things before registering the server: API credentials (the client_id and client_secret from your Developer Portal application), a local installation of uv to run the server process, and an MCP host such as VS Code with GitHub Copilot, Claude Desktop, or Cursor.

Clone the repo from github.com/foxitsoftware/foxit-pdf-api-mcp-server, then register it in your host’s MCP config. For VS Code with GitHub Copilot, add the following to .vscode/mcp.json:

{
  "servers": {
    "foxit-pdf": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/foxit-pdf-api-mcp-server",
        "run",
        "foxit-pdf-api-mcp-server"
      ],
      "env": {
        "FOXIT_CLOUD_API_HOST": "https://na1.fusion.foxit.com/pdf-services",
        "FOXIT_CLOUD_API_CLIENT_ID": "your_client_id",
        "FOXIT_CLOUD_API_CLIENT_SECRET": "your_client_secret"
      }
    }
  }
}

For Claude Desktop, the same three environment variables go into the env block of your claude_desktop_config.json under the mcpServers key, with command and args matching the structure above.
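Concretely, the Claude Desktop entry would look like this. A sketch following the structure above, assuming the same clone path; verify the key names against your Claude Desktop version:

```
{
  "mcpServers": {
    "foxit-pdf": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/foxit-pdf-api-mcp-server",
        "run",
        "foxit-pdf-api-mcp-server"
      ],
      "env": {
        "FOXIT_CLOUD_API_HOST": "https://na1.fusion.foxit.com/pdf-services",
        "FOXIT_CLOUD_API_CLIENT_ID": "your_client_id",
        "FOXIT_CLOUD_API_CLIENT_SECRET": "your_client_secret"
      }
    }
  }
}
```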

Set FOXIT_CLOUD_API_CLIENT_ID and FOXIT_CLOUD_API_CLIENT_SECRET as environment variables on your system before the host process launches. Passing credentials through prompt context is a security risk your production setup should address. The client_id and client_secret from your developer portal authenticate all MCP tool calls to the PDF Services API. Adding eSign to the same agent session requires its own OAuth2 token exchange (covered in the next section), keeping the two credential scopes isolated.

Restart your MCP host after saving the config. The server advertises all tools to the host on connection, so your agent can inspect available operations before invoking any.

PDF Services MCP Tools: Full Catalog

The 30+ tools organize into seven functional categories. Most tools expect a documentId returned by a prior upload_document call, and return a resultDocumentId you pass to download_document when you want the output locally. The exception is pdf_from_url, which accepts a URL directly.

Document Lifecycle

  • upload_document: upload a PDF, Office file, image, HTML file, or plain text file; returns a documentId for subsequent operations
  • download_document: retrieve a processed result to a local file path
  • delete_document: clean up stored files from cloud storage

PDF Creation (file to PDF)

  • pdf_from_word, pdf_from_excel, pdf_from_ppt: convert Office documents to PDF
  • pdf_from_text, pdf_from_image, pdf_from_html: convert plaintext, image files, or HTML to PDF
  • pdf_from_url: fetch a live URL and convert the rendered page to PDF

PDF Conversion (PDF to file)

  • pdf_to_word, pdf_to_excel, pdf_to_ppt: extract editable Office formats from a PDF
  • pdf_to_text, pdf_to_html, pdf_to_image: export text, HTML, or image representations

Manipulation

  • pdf_merge: combine multiple PDFs into one
  • pdf_split: split by page ranges, page count, or every page individually
  • pdf_extract: pull a subset of pages from a PDF
  • pdf_compress: reduce file size by 30-70% depending on content type
  • pdf_flatten: convert form fields and annotations to static content (required for compliance archiving workflows)
  • pdf_linearize: optimize for Fast Web View so browsers can stream PDF pages incrementally
  • pdf_watermark: apply text or image watermarks with configurable position, opacity, and rotation
  • pdf_manipulate: rotate, delete, or reorder pages

Analysis

  • pdf_compare: diff two PDFs and return a color-coded annotation document showing changes
  • pdf_ocr: convert scanned or image-based PDFs to searchable text with multi-language support
  • pdf_structural_analysis: extract layouts, tables, images, form fields, metadata, and text as structured JSON

Security and Forms

  • pdf_protect: add password protection with 128-bit or 256-bit AES encryption and granular permission flags
  • pdf_remove_password: strip password protection from a document
  • export_pdf_form_data: extract form field values as JSON
  • import_pdf_form_data: populate form fields from a JSON payload

Properties

  • get_pdf_properties: return page count, page dimensions, PDF version, encryption status, digital signature info, embedded files, font inventory, and document metadata

The most-used operation in production document pipelines is pdf_from_word. Your agent uploads a DOCX file, gets back a documentId, then calls pdf_from_word with that ID. The underlying PDF Services API runs the conversion asynchronously, but the MCP Server handles polling internally and delivers the final result directly to your agent.

MCP tool call:

{
  "name": "pdf_from_word",
  "input": {
    "documentId": "doc_abc123"
  }
}

MCP tool response:

{
  "success": true,
  "taskId": "task_xyz789",
  "resultDocumentId": "doc_result456",
  "message": "Word document converted to PDF successfully. Download using documentId: doc_result456"
}

Pass doc_result456 to download_document to write the output PDF to disk, or feed it directly into another tool call like pdf_structural_analysis or pdf_compress as the next step in a chain.
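The chaining pattern can be sketched host-agnostically. Here call_tool is a hypothetical dispatcher standing in for whatever invocation mechanism your MCP host exposes; the tool names and result keys follow the catalog above, and the canned results are illustrative only:

```python
def call_tool(name, payload):
    """Hypothetical MCP dispatch; a real host routes this over the protocol."""
    fake_results = {  # canned responses for this sketch
        "pdf_from_word": {"resultDocumentId": "doc_result456"},
        "pdf_compress": {"resultDocumentId": "doc_final789"},
    }
    return {"success": True, **fake_results[name]}

# Chain: feed the conversion result straight into compression,
# with no local download in between.
converted = call_tool("pdf_from_word", {"documentId": "doc_abc123"})
compressed = call_tool("pdf_compress",
                       {"documentId": converted["resultDocumentId"]})
print(compressed["resultDocumentId"])  # doc_final789
```

The documents stay in cloud storage between steps, so only the final download_document call moves bytes to your machine.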

Extending to eSign: Foxit’s Signing API as a Complementary REST Layer

After PDF processing via MCP tools, your agent can dispatch a document for signature by calling Foxit’s eSign REST API directly. The eSign API lives at https://na1.foxitesign.foxit.com with regional variants for EU (eu1.foxitesign.foxit.com), Canada (na2.foxitesign.foxit.com), and Australia (au1.foxitesign.foxit.com). These are direct HTTP calls from your agent to the eSign endpoints, coordinated alongside MCP tool calls in the same session.

Authentication uses OAuth2 client_credentials. The eSign token exchange is a distinct flow from the PDF Services header auth that backs your MCP tools:

import requests

resp = requests.post(
    "https://na1.foxitesign.foxit.com/api/oauth2/access_token",
    data={
        "client_id": ESIGN_CLIENT_ID,
        "client_secret": ESIGN_CLIENT_SECRET,
        "grant_type": "client_credentials",
        "scope": "read-write"
    }
)
access_token = resp.json()["access_token"]
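Bearer tokens are typically reusable until they expire, so rather than re-authenticating on every call, a small cache helps. This is a sketch only: it assumes the standard OAuth2 `expires_in` lifetime field appears in the eSign token response, which is worth confirming against the eSign API reference.

```python
import time
import requests

_token_cache = {"value": None, "expires_at": 0.0}

def get_esign_token(client_id: str, client_secret: str) -> str:
    """Return a cached bearer token, refreshing 60 seconds before expiry."""
    if _token_cache["value"] and time.time() < _token_cache["expires_at"] - 60:
        return _token_cache["value"]
    resp = requests.post(
        "https://na1.foxitesign.foxit.com/api/oauth2/access_token",
        data={
            "client_id": client_id,
            "client_secret": client_secret,
            "grant_type": "client_credentials",
            "scope": "read-write",
        },
    )
    resp.raise_for_status()
    body = resp.json()
    _token_cache["value"] = body["access_token"]
    # expires_in is the standard OAuth2 lifetime field; adjust this if the
    # eSign response names it differently.
    _token_cache["expires_at"] = time.time() + int(body.get("expires_in", 3600))
    return _token_cache["value"]
```

Agents that mix many eSign calls into one session benefit the most, since each tool-call turn can reuse the same token.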

The Foxit eSign API developer guide uses “folder” terminology throughout. The key endpoints in an automated signing flow are:

  • POST /folders/createfolder: create a signing folder from one or more PDF documents, define signers, subject, and message
  • POST /folders/sendDraftFolder: dispatch the folder to signers
  • POST /templates/createtemplate: instantiate a folder from a saved template with pre-placed signature fields
  • GET /folders/getFolderHistory: retrieve the full activity audit trail for a folder
  • Webhook channels for status callbacks: register a callback URL to receive real-time events when signers view, sign, or decline

The createfolder call takes the PDF output from your MCP pipeline, uploaded to eSign’s document storage after download_document retrieves it, and sets up the signing workflow:

POST /api/folders/createfolder
Authorization: Bearer {access_token}
Content-Type: application/json
{
  "folderName": "Acme Corp Contract - Q3 2025",
  "sendNow": false,
  "fileUrls": [
    "https://your-storage.example.com/acme_contract_final.pdf"
  ],
  "fileNames": [
    "acme_contract_final.pdf"
  ],
  "parties": [
    {
      "firstName": "John",
      "lastName": "Smith",
      "emailId": "[email protected]",
      "permission": "FILL_FIELDS_AND_SIGN",
      "sequence": 1
    }
  ]
}

Set sendNow to false to create a draft folder, then dispatch it with a separate call to /folders/sendDraftFolder. Alternatively, set sendNow to true to create and send in a single call. For files not accessible via URL, use base64FileString instead of fileUrls.
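In Python, the draft-then-send sequence looks roughly like this. Treat it as a sketch: the request payload mirrors the JSON body above, but the name of the folder-ID field in the createfolder response is an assumption to verify against the eSign API reference.

```python
import requests

ESIGN_HOST = "https://na1.foxitesign.foxit.com"

def create_and_send_folder(access_token: str, file_url: str, signer_email: str) -> dict:
    """Create a draft signing folder, then dispatch it with a second call."""
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    draft = requests.post(
        f"{ESIGN_HOST}/api/folders/createfolder",
        headers=headers,
        json={
            "folderName": "Acme Corp Contract - Q3 2025",
            "sendNow": False,  # create a draft; dispatch separately below
            "fileUrls": [file_url],
            "fileNames": [file_url.rsplit("/", 1)[-1]],
            "parties": [{
                "firstName": "John",
                "lastName": "Smith",
                "emailId": signer_email,
                "permission": "FILL_FIELDS_AND_SIGN",
                "sequence": 1,
            }],
        },
    ).json()
    # The folder-ID key in the createfolder response is an assumption here;
    # check the eSign API reference for the exact field name.
    folder_id = draft.get("folder", {}).get("folderId") or draft.get("folderId")
    return requests.post(
        f"{ESIGN_HOST}/api/folders/sendDraftFolder",
        headers=headers,
        json={"folderId": folder_id},
    ).json()
```

Splitting create from send gives your agent a checkpoint: it can validate the draft (or wait for human approval) before anything reaches a signer's inbox.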

Foxit’s eSign API ships with HIPAA, eIDAS, ESIGN Act, UETA, 21 CFR Part 11, FERPA, and FINRA compliance built in. Audit trail records carry signer location, IP address, recipient identity, event timestamp, consent confirmation, security level, and complete folder history. For legal defensibility in regulated industries, capture and store these fields in your own data layer, because relying solely on Foxit’s folder history API for compliance record-keeping introduces a single point of failure in your audit chain.

End-to-End Workflow: AI Agent Automates a Sales Contract

Your sales ops agent receives a natural language instruction: “Generate a contract for Acme Corp, $48,000 ARR, send to [email protected] for signature.” The agent handles every step autonomously. Each call is labeled as either an MCP tool invocation or a direct REST call.

Sequence diagram showing an AI agent using the Foxit MCP Server, DocGen REST API, and eSign REST API to automate PDF conversion, document generation, and digital signing workflows.

Step 1 uses MCP tool calls. The agent calls upload_document with the DOCX contract template, receives documentId: "doc_abc", then calls pdf_from_word. The MCP Server handles the async conversion and returns resultDocumentId: "doc_pdf" once it completes.

Step 2 uses an MCP tool call. The agent calls pdf_structural_analysis with documentId: "doc_pdf". The tool extracts party names, deal terms, and table data as JSON. The agent validates that required fields are present before proceeding.

Step 3 is a direct REST call to the DocGen API. The agent posts to /document-generation/api/GenerateDocumentBase64 with the validated field values merged into the contract template via {{dynamic_tags}} syntax. DocGen returns the finalized PDF with Acme Corp’s name, the $48,000 ARR figure, and correct dates populated.

Step 4 uses direct REST calls to the eSign API. The agent authenticates via OAuth2, uploads the DocGen output to eSign’s document storage, creates a signing folder via /folders/createfolder with [email protected] as the signer, and dispatches it via /folders/sendDraftFolder.

The LLM selects MCP tools for PDF processing and direct HTTP calls for eSign and DocGen because your system prompt specifies the endpoint contract for each step. The agent chains outputs across both call types, with coordination logic living in the prompt rather than in custom orchestration code you maintain separately.

Production Considerations: Error Handling, Rate Limits, and Data Governance

When you call PDF Services through the MCP Server, async polling happens inside the server process. Your agent receives a final resultDocumentId only after the task completes. When you call the raw PDF Services REST API directly, every operation returns a taskId you poll manually. The pattern below applies exponential backoff with a ceiling of 10 seconds per interval and a 30-second total timeout:

import time, requests

API_HOST = "https://na1.fusion.foxit.com/pdf-services"
auth_headers = {
    "client_id": "your_client_id",
    "client_secret": "your_client_secret"
}

def poll_task(task_id: str, max_wait: int = 30) -> str:
    delay = 1
    elapsed = 0
    while elapsed < max_wait:
        resp = requests.get(
            f"{API_HOST}/api/tasks/{task_id}",
            headers=auth_headers
        )
        data = resp.json()
        if data["status"] == "COMPLETED":
            return data["resultDocumentId"]
        time.sleep(delay)
        elapsed += delay
        delay = min(delay * 2, 10)
    raise TimeoutError(f"Task {task_id} timed out after {max_wait}s")

The free developer plan at developer-api.foxit.com covers development and testing volumes. Production workloads above the free-tier threshold require a volume plan requested through the Developer Portal.

For data governance, all API traffic runs over TLS 1.2+, and documents at rest use AES-256 encryption. Foxit’s API security documentation covers SOC 2 Type II audit status, HIPAA BAA support, GDPR, CCPA, eIDAS, ESIGN Act, UETA, 21 CFR Part 11, FERPA, and FINRA requirements. Customer data runs in logically segmented environments. For healthcare, legal, or financial services pipelines, confirm your data residency requirements before connecting production document flows, because the eu1, na2, and au1 regional eSign endpoints determine where data is processed.

PDF API MCP Server FAQs

What is the Foxit PDF API MCP Server?

The Foxit PDF API MCP Server is an open-source Model Context Protocol server that exposes Foxit’s cloud PDF Services API as 30+ callable tools. Any MCP-compatible AI agent host, including Claude Desktop, VS Code with GitHub Copilot, and Cursor, can invoke these tools directly.

Which operations does the server support?

The server supports conversion (Word, Excel, PowerPoint, image, HTML, and URL to PDF and back), OCR, merge, split, extract, compress, flatten, linearize, watermark, compare, form data import/export, password protection, and full document property inspection across seven functional tool categories.

How do the PDF Services and eSign credentials differ?

PDF Services tools authenticate via a client_id and client_secret set as environment variables before the MCP host launches. The eSign API uses a separate OAuth2 client_credentials token exchange against https://na1.foxitesign.foxit.com/api/oauth2/access_token. The two credential scopes are isolated by design.

Does it work with Claude Desktop, VS Code, and Cursor?

Yes. The server registers using a standard mcp.json config block for VS Code with GitHub Copilot or a claude_desktop_config.json block for Claude Desktop. The same config structure works for Cursor. All three hosts discover the server’s tools automatically on connection.

What does the free tier cover?

The Foxit developer account is free with no credit card required and covers development and testing volumes. Production workloads above the free-tier threshold require a volume plan through the Developer Portal.

Run Your First Tool Call Now

Getting a working MCP tool call takes under 15 minutes:

  1. Create a free developer account at developer-api.foxit.com (no credit card, instant access). Copy your client_id and client_secret from the dashboard.

  2. Set the three environment variables:

export FOXIT_CLOUD_API_HOST="https://na1.fusion.foxit.com/pdf-services"
export FOXIT_CLOUD_API_CLIENT_ID="your_client_id"
export FOXIT_CLOUD_API_CLIENT_SECRET="your_client_secret"
  3. Clone the repo, register it using the config block from the Prerequisites section, restart your MCP host, and invoke pdf_from_url with any public URL. You’ll have a confirmed PDF output in your working directory. The Developer Portal also includes a live API Playground for validating request payloads against the PDF Services API before wiring them into an agent.

For a full signing workflow, the minimum viable addition to the MCP setup is authenticating against the eSign OAuth2 endpoint and posting to /folders/createfolder with a static PDF. DocGen field population, pdf_structural_analysis validation, and webhook callbacks extend the same pattern incrementally from there.

Get your free API access at developer-api.foxit.com.

Automate Dynamic PDF Generation with the Foxit DocGen API: Word Templates, JSON Data, and Real API Calls

Foxit DocGen API workflow showing a Word template with data tags being converted into a PDF document using JSON data.

Skip the HTML-to-PDF headaches. Use Foxit’s DocGen API to turn Word templates and JSON data into clean, formatted PDFs with one API call.

If you’ve tried to generate a contract or invoice from HTML, you’ve probably burned hours on page-break-inside: avoid declarations that Chrome renders one way and a headless browser renders another. Headers and footers require separate print-media queries, and by the time you’ve got a repeating table header working correctly across pages, you’ve invested a full day of engineering into CSS that exists solely to trick a browser into behaving like a printer.

HTML documents reflow content into a viewport while PDF documents have fixed page geometry. Forcing one model into the other produces predictable failure modes: footnotes that collide with page footers, tables that split at the worst possible row, custom fonts that substitute silently, and signature blocks that drift off-page on longer documents.

There’s a larger practical cost too. For most teams, the authoritative source for enterprise document templates is already a Word file. Your legal team owns the NDA in .docx format. Finance owns the invoice in .docx format. Every structural change flows through Word because that’s where the tracked changes, formatting history, and review process live. Maintaining a parallel HTML version of each template doubles your maintenance surface from day one.

Foxit’s DocGen API eliminates that parallel entirely. You keep your templates as .docx files, embed data tags directly in Word, POST the base64-encoded template and a JSON payload to a single REST endpoint, and receive the rendered PDF (or DOCX) in the response body. You eliminate the browser rendering engine, the print-media CSS layer, and the overhead of a second template format.

How the Foxit DocGen API Works

The core model is a single synchronous POST to the GenerateDocumentBase64 endpoint at developer-api.foxit.com. Your request body carries three fields:

  • base64FileString: your .docx template, base64-encoded
  • documentValues: a JSON object containing your merge data
  • outputFormat: either "pdf" or "docx"

The API processes the template, resolves every tag against your data, and returns a JSON response containing base64FileString (the rendered document) and a message field confirming success or describing a failure. The exchange is fully synchronous, so you receive the finished document in the same HTTP response with no job ID to poll and no webhook to configure.

Authentication uses two HTTP headers: client_id and client_secret. Both come from the Foxit Developer Portal when you create an account. The free Developer plan provides 500 credits per year with no credit card required, and each GenerateDocumentBase64 call consumes exactly one credit. The Startup plan ($1,750/year) provides 3,500 credits. The Business plan ($4,500/year) covers 150,000 credits for production workloads. For context, Nutrient’s API starts at $75 for 1,000 credits, and Apryse requires a sales conversation before you can access pricing at all.

The complete call flow runs from template file to PDF on disk.

Sequence diagram showing the Foxit DocGen API workflow from reading a Word template and encoding it to base64, sending the POST request, and receiving the rendered PDF response.

You can explore every endpoint in the live API playground at developer-api.foxit.com, and the portal includes a Postman collection you can import to run authenticated requests without writing a line of code first.

Build a Word Template with DocGen Tags

Open any .docx file in Microsoft Word and type your tags as plain text directly in the document. The DocGen API uses double-brace syntax: {{field_name}}. Tags go anywhere Word accepts text: headings, body paragraphs, table cells, headers, footers, or text boxes.

Scalar field tags resolve directly to the matching key from your documentValues JSON. A document header with {{customer_name}}, {{invoice_number}}, and {{invoice_date}} pulls those three values straight from the top-level keys of your payload.

For arrays, you wrap a single table row (the data row, not the header row) with {{TableStart:array_name}} and {{TableEnd:array_name}} markers. The wrapped row acts as a template row, and the API renders one output row per item in the JSON array. An invoice line-items table in Word looks like this:

| Description | Qty | Unit Price | Total |
| --- | --- | --- | --- |
| {{TableStart:line_items}}{{description}} | {{qty}} | {{unit_price}} | {{total}}{{TableEnd:line_items}} |

Within the array row, {{ROW_NUMBER}} auto-increments with each rendered row. A SUM(ABOVE) field placed in the row directly below the {{TableEnd:line_items}} marker calculates a column total across all rendered data rows.

For nested JSON objects, use dot-notation in your tags. A shipping address block references {{shipping.street}}, {{shipping.city}}, and {{shipping.postal_code}}, mapping to properties nested inside a shipping object in your payload. The nesting can go multiple levels deep, so {{customer.address.city}} resolves against documentValues.customer.address.city.

For a working starting point, grab the downloadable invoice template from the foxit-demo-templates repo. The file is well under the 4 MB upload limit and demonstrates every pattern this article uses: scalar tags, {{TableStart:line_items}} / {{TableEnd:line_items}} with {{ROW_NUMBER}}, currency and date format switches, and subtotal / tax / total fields below the line-items table.

One sizing constraint applies while you build your own template. DocGen rejects uploads larger than 4 MB, so if you embed product photos, scanned letterhead, or full font subsets, compress the images before saving, drop embedded fonts where you can rely on system fonts, or split a large template into smaller per-section templates that you generate and merge separately.
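Since an oversize upload costs you a round trip (and, on the paid plans, potentially a credit), a pre-flight size check is cheap insurance. A minimal sketch; note the 4 MB limit described above applies to the .docx file itself, before base64 encoding:

```python
import base64
import os

MAX_TEMPLATE_BYTES = 4 * 1024 * 1024  # DocGen's documented 4 MB upload limit

def load_template_b64(path: str) -> str:
    """Fail fast on oversize templates before making the API call."""
    size = os.path.getsize(path)
    if size > MAX_TEMPLATE_BYTES:
        raise ValueError(
            f"{path} is {size / 1_048_576:.1f} MB; DocGen rejects templates over 4 MB. "
            "Compress embedded images or split the template first."
        )
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```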

Make Your First API Call: Generate a PDF from JSON

Run a quick pre-flight check before the first call to catch the issues that derail most clean-account run-throughs:

  • Account created and client_id / client_secret copied from the Developer Portal API Keys section
  • Sample template saved locally as invoice_template.docx in the directory you’ll run the script from
  • Template file size confirmed under 4 MB (ls -lh invoice_template.docx on macOS or Linux, right-click → Properties on Windows)

With those in place, confirm your credentials work with a cURL call. The Foxit Developer Portal includes a Postman collection for this, but a quick cURL request against the API catches auth issues before any code runs:

curl -X POST "https://na1.fusion.foxit.com/document-generation/api/GenerateDocumentBase64" \
  -H "client_id: YOUR_CLIENT_ID" \
  -H "client_secret: YOUR_CLIENT_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"base64FileString":"","documentValues":{},"outputFormat":"pdf"}'

A 401 here means invalid credentials. A 400 with a message about the template confirms your headers are accepted and you can proceed to the full call.

Save your .docx template as invoice_template.docx in the same directory as this script, then run the complete generation:

import requests
import base64

CLIENT_ID = "your_client_id"
CLIENT_SECRET = "your_client_secret"
API_URL = "https://na1.fusion.foxit.com/document-generation/api/GenerateDocumentBase64"

# Read and encode the template
with open("invoice_template.docx", "rb") as f:
    template_b64 = base64.b64encode(f.read()).decode("utf-8")

# Build the data payload
document_values = {
    "customer_name": "Acme Corporation",
    "invoice_number": "INV-2025-0042",
    "invoice_date": "07/15/2025",
    "due_date": "08/14/2025",
    "line_items": [
        {
            "description": "API Integration Consulting",
            "qty": 8,
            "unit_price": 195.00,
            "total": 1560.00
        },
        {
            "description": "Document Automation Setup",
            "qty": 1,
            "unit_price": 750.00,
            "total": 750.00
        }
    ],
    "subtotal": 2310.00,
    "tax_rate": 0.08,
    "tax_amount": 184.80,
    "total_due": 2494.80
}

# Construct the request body
payload = {
    "base64FileString": template_b64,
    "documentValues": document_values,
    "outputFormat": "pdf"
}

headers = {
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
    "Content-Type": "application/json"
}

response = requests.post(API_URL, json=payload, headers=headers)

if response.status_code == 200:
    result = response.json()
    pdf_bytes = base64.b64decode(result["base64FileString"])
    if pdf_bytes[:5] != b"%PDF-":
        raise ValueError("Response did not contain a valid PDF")
    with open("invoice_output.pdf", "wb") as out:
        out.write(pdf_bytes)
    print("PDF written to invoice_output.pdf")
else:
    print(f"Error {response.status_code}: {response.json().get('message')}")

The success response is a JSON object with three keys: base64FileString (the rendered PDF, base64-encoded), fileExtension ("pdf"), and message ("PDF Document Generated Successfully"). Decoding and writing the bytes to disk gives you a complete, formatted PDF with every tag replaced by its corresponding data value. If you omit a key from documentValues, the API renders the corresponding tag as an empty string, producing a blank field in the output.

Advanced Data Scenarios: Arrays, Nested Objects, and Built-In Functions

The two-row invoice above works, but most production documents have more complex data shapes. Three patterns cover the majority of real-world cases.

For multi-row tables, the line_items array in the Python snippet above already shows the basic structure. To generate five rows, pass five objects in the array. The Word template row tagged with {{TableStart:line_items}} and {{TableEnd:line_items}} repeats exactly once per array item:

{
  "line_items": [
    {
      "description": "UX Design Review",
      "qty": 4,
      "unit_price": 150.0,
      "total": 600.0
    },
    {
      "description": "Backend API Development",
      "qty": 12,
      "unit_price": 185.0,
      "total": 2220.0
    },
    {
      "description": "Database Schema Migration",
      "qty": 3,
      "unit_price": 200.0,
      "total": 600.0
    },
    {
      "description": "QA Testing",
      "qty": 6,
      "unit_price": 95.0,
      "total": 570.0
    },
    {
      "description": "Deployment and Documentation",
      "qty": 2,
      "unit_price": 175.0,
      "total": 350.0
    }
  ]
}

The API generates exactly five table rows. Swap in 50 items and you get 50 rows, with page breaks handled by Word’s native pagination logic.
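Production payloads are usually assembled from records rather than written by hand. A small helper like this (hypothetical, but matching the field names the template above expects) keeps the qty * unit_price arithmetic in one place:

```python
def build_line_items(records):
    """Map (description, qty, unit_price) records to DocGen row objects."""
    return [
        {
            "description": desc,
            "qty": qty,
            "unit_price": price,
            "total": round(qty * price, 2),  # computed, not trusted from input
        }
        for desc, qty, price in records
    ]

line_items = build_line_items([
    ("UX Design Review", 4, 150.0),
    ("Backend API Development", 12, 185.0),
    ("QA Testing", 6, 95.0),
])
```

Whatever feeds your pipeline (a database query, a CRM export), the only contract that matters is that the emitted keys match the tag names in the template exactly, casing included.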

For nested objects, the DocGen API resolves dot-notation paths against the full depth of your JSON structure. A shipping confirmation template referencing {{customer.address.city}} works against this payload without any flattening on your end:

{
  "customer": {
    "name": "Sarah Chen",
    "email": "[email protected]",
    "address": {
      "street": "742 Evergreen Terrace",
      "city": "Portland",
      "state": "OR",
      "postal_code": "97201"
    }
  }
}

In the Word template, {{customer.name}}, {{customer.address.city}}, and {{customer.address.postal_code}} each resolve to the correct nested value. You can reference the same nested object from multiple locations in the template, and the API populates each instance independently.

For numeric and date formatting, the DocGen API respects Word’s native field switch syntax. Adding \# Currency to a tag formats a numeric value as a currency string, so {{unit_price \# Currency}} renders 195.00 as $195.00. Date fields accept \@ "MM/dd/yyyy" to control output format, so {{invoice_date \@ "MM/dd/yyyy"}} formats an ISO date string to 07/15/2025. To auto-calculate a column total, place a SUM(ABOVE) field in the Word table row immediately below {{TableEnd:line_items}} and the API evaluates it against the rendered data rows.

Error Handling and Production Readiness

The DocGen API returns a focused set of HTTP status codes. A 200 confirms successful generation. A 401 means your client_id or client_secret headers are invalid, and the fix is to re-copy the credentials from the Developer Portal. A 400 covers three cases. The first is a malformed request body, for example a missing base64FileString or outputFormat. The second is structural issues with the template itself, such as a {{TableStart}} marker placed outside its table row. The third is an oversize template; DocGen rejects .docx uploads larger than 4 MB, and the fix is to compress embedded images, drop embedded fonts, or split the template before re-encoding. The message field in every non-200 response body gives you the specific reason, so log it rather than discarding the response object.

A production wrapper handles all three cases and adds exponential backoff for transient server errors:

import requests
import base64
import time

def generate_document(client_id, client_secret, template_path,
                      document_values, output_format="pdf"):
    API_URL = "https://na1.fusion.foxit.com/document-generation/api/GenerateDocumentBase64"

    with open(template_path, "rb") as f:
        template_b64 = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "base64FileString": template_b64,
        "documentValues": document_values,
        "outputFormat": output_format
    }
    headers = {
        "client_id": client_id,
        "client_secret": client_secret,
        "Content-Type": "application/json"
    }

    max_retries = 3
    for attempt in range(max_retries):
        try:
            response = requests.post(API_URL, json=payload,
                                     headers=headers, timeout=30)

            if response.status_code == 200:
                return base64.b64decode(response.json()["base64FileString"])

            if response.status_code == 401:
                raise ValueError("Authentication failed: re-check client_id and client_secret")

            if response.status_code == 400:
                msg = response.json().get("message", "Bad request")
                raise ValueError(f"Request error: {msg}")

            if response.status_code >= 500:
                if attempt < max_retries - 1:
                    wait = 2 ** attempt
                    print(f"Server error ({response.status_code}), retrying in {wait}s...")
                    time.sleep(wait)
                    continue
                raise RuntimeError(f"Server error after {max_retries} attempts")

        except requests.exceptions.Timeout:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)
                continue
            raise

    raise RuntimeError("Max retries exceeded")

The wrapper raises immediately on 4xx responses because retrying a credential error or a malformed request produces the same result. Exponential backoff applies only to 5xx responses and timeouts, where the issue is transient.

Once generate_document() returns raw PDF bytes, routing them downstream takes three lines:

import boto3

s3 = boto3.client("s3")
pdf_bytes = generate_document(CLIENT_ID, CLIENT_SECRET, "invoice_template.docx", document_values)
s3.put_object(Bucket="my-documents-bucket", Key="invoices/INV-2025-0042.pdf", Body=pdf_bytes)

To attach the output to an email, pass pdf_bytes directly as the smtplib attachment payload. To collect a signature on the generated document, base64-encode the bytes and POST them to Foxit’s eSign API with the signer’s email address in the request body. The full eSign API reference is at docs.developer-api.foxit.com.
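The smtplib route can be sketched with the standard-library EmailMessage class. The SMTP host and addresses below are placeholders for your own mail infrastructure:

```python
import smtplib
from email.message import EmailMessage

def email_invoice(pdf_bytes: bytes, to_addr: str) -> None:
    """Attach the generated PDF and hand it to an SMTP relay.
    smtp.example.com and the From address are placeholders."""
    msg = EmailMessage()
    msg["Subject"] = "Invoice INV-2025-0042"
    msg["From"] = "billing@example.com"
    msg["To"] = to_addr
    msg.set_content("Your invoice is attached.")
    msg.add_attachment(
        pdf_bytes,
        maintype="application",
        subtype="pdf",
        filename="invoice.pdf",
    )
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)
```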

Common Mistakes

A short list of the issues that account for almost every failed first run.

  • Smart-quote autocorrect on braces. Word’s AutoCorrect can convert the second { of {{ into a curly-quote glyph, which breaks tag parsing silently. Disable “Straight quotes with smart quotes” under AutoCorrect Options, or paste tags as plain text.
  • Token case sensitivity. {{Customer_Name}} and {{customer_name}} are different keys. Match the casing in your JSON exactly.
  • TableStart and TableEnd must sit in the same Word table row. Splitting them across two rows, or placing either marker outside the table, leaves the loop unrendered with no error.
  • Template over 4 MB. The API rejects oversize uploads with a 400. Compress embedded images, drop embedded fonts where system fonts will do, or split the template into smaller pieces.
  • Missing payload key. The API renders an unmatched tag as an empty string rather than failing, so a 200 response does not guarantee every field is populated. Spot-check the rendered PDF as part of any pipeline test.
  • Auth header typos. Headers are client_id and client_secret in snake_case. Client-Id, ClientId, or X-Client-Id all return 401.
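The empty-string and casing pitfalls above are cheap to catch before spending a credit. A .docx file is a zip of XML parts, so a rough smoke test can diff template tags against payload keys. Rough, because Word can split a tag across XML runs, in which case this regex misses it; treat it as a pre-flight check, not a guarantee:

```python
import re
import zipfile

TAG_RE = re.compile(r"\{\{\s*([A-Za-z0-9_.:]+)")

def template_tags(docx_path: str) -> list:
    """Extract {{tag}} names from a .docx's main document part, in order."""
    with zipfile.ZipFile(docx_path) as z:
        xml = z.read("word/document.xml").decode("utf-8", errors="ignore")
    return TAG_RE.findall(xml)

def missing_keys(docx_path: str, document_values: dict) -> set:
    """Top-level payload keys the template needs but the payload lacks."""
    builtins = {"ROW_NUMBER", "today"}
    missing, depth = set(), 0
    for tag in template_tags(docx_path):
        if tag.startswith("TableStart:"):
            depth += 1
            tag = tag.split(":", 1)[1]   # the array name itself is a top-level key
        elif tag.startswith("TableEnd:"):
            depth = max(depth - 1, 0)
            continue
        elif depth > 0:
            continue                     # row-level tags resolve against array items
        top = tag.split(".")[0]          # dot paths resolve at the top-level key
        if top not in document_values and top not in builtins:
            missing.add(top)
    return missing
```

Run it right after building document_values; a non-empty result means the rendered PDF would have blank fields despite a 200 response.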

Run the Full Invoice Example End-to-End Right Now

Create a free account directly at account.foxit.com/site/sign-up. This skips the pricing-page redirect you hit from the marketing site and drops you straight into the account form.

  1. Open account.foxit.com/site/sign-up and complete the form (no credit card required).
  2. After verification, sign in to the Developer Portal and the Developer plan (500 credits per year) is active by default.
  3. Open the API Keys section and copy your client_id and client_secret.

With credentials in hand, run the example end-to-end:

  1. Download invoice_full.docx from the foxit-demo-templates repo and save it locally as invoice_template.docx in your working directory. The file is well under the 4 MB upload limit and exercises every tag pattern this article covers.
  2. Paste your credentials into the CLIENT_ID and CLIENT_SECRET variables in the Python script from the previous section.
  3. Edit the document_values dictionary with your own customer name, invoice number, and line items.
  4. Run the script and open invoice_output.pdf.

The free Developer plan’s 500 annual credits cover this tutorial dozens of times over before you spend anything. The full API reference at docs.developer-api.foxit.com covers every endpoint parameter, the complete tag specification, all supported output formats, and the full GenerateDocumentBase64 request and response schema.

Get started with a free account (no credit card required) and generate your first dynamic PDF in under 10 minutes.

Generate Dynamic PDFs from JSON using Foxit APIs


See how easy it is to generate PDFs from JSON using Foxit’s Document Generation API. With Word as your template engine, you can dynamically build invoices, offer letters, and agreements—no complex setup required. This tutorial walks through the full process in Python and highlights the flexibility of token-based document creation.


One of the more fascinating APIs in our library is the Document Generation API. It lets you create dynamic PDFs or Word documents using Word files as templates, populated with your own data. That may sound simple, and the code you’re about to see is indeed simple, but the real power lies in how flexible Word can be as a template engine. This API could be used for invoices, offer letters, agreements, and any other document whose layout is fixed but whose data changes per recipient.

All of this is made available via a simple API and a “token language” you’ll use within Word to create your templates. Whether you’re feeding in data from a database, a form submission, or a JSON API response, the process looks the same from your Python script. Let’s take a look at how this is done.

Credentials

Before we go any further, head over to our developer portal and grab a set of free credentials. This includes a client ID and a client secret; you’ll need both to use the API.

Don’t want to read all of this? You can also follow along with the video version of this tutorial.

Using the API

The Document Generation API flow is a bit different from our PDF Services APIs in that the execution is synchronous. You don’t need to upload your document beforehand or download a result. You simply call the API (passing your data and template) and the result has your new PDF (or Word document). With it being this simple, let’s get into the code.

Loading Credentials

My script begins by loading in the credentials and API root host via the environment:

import os

CLIENT_ID = os.environ.get('CLIENT_ID')
CLIENT_SECRET = os.environ.get('CLIENT_SECRET')
HOST = os.environ.get('HOST')

As always, try to avoid hard coding credentials directly into your code.

Calling the API

The endpoint only requires you to pass the output format, your data, and a base64 version of your file. “Your data” can be almost anything you like—though it should start as an object (i.e., a dictionary in Python with key/value pairs). Beneath that, anything goes: strings, numbers, arrays of objects, and so on.

Here’s a Python wrapper showing this in action:

import requests

def docGen(doc, data, id, secret):
    headers = {
        "client_id":id,
        "client_secret":secret
    }

    body = {
        "outputFormat":"pdf",
        "documentValues": data,  
        "base64FileString":doc
    }

    request = requests.post(f"{HOST}/document-generation/api/GenerateDocumentBase64", json=body, headers=headers)
    return request.json()

And here’s an example calling it:

with open('../../inputfiles/docgen_sample.docx', 'rb') as file:
    bd = file.read()
    b64 = base64.b64encode(bd).decode('utf-8')

data = {
    "name":"Raymond Camden", 
    "food": "sushi",
    "favoriteMovie": "Star Wars",
    "cats": [
        {"name":"Elise", "gender":"female", "age":14 },
        {"name":"Luna", "gender":"female", "age":13 },
        {"name":"Crackers", "gender":"male", "age":13 },
        {"name":"Gracie", "gender":"female", "age":12 },
        {"name":"Pig", "gender":"female", "age":10 },
        {"name":"Zelda", "gender":"female", "age":2 },
        {"name":"Wednesday", "gender":"female", "age":1 },
    ],
}

result = docGen(b64, data, CLIENT_ID, CLIENT_SECRET)

You’ll note here that my data is hard-coded. In a real application, this would typically be dynamic—read from the file system, queried from a database, or sourced from any other location.

The result object contains a message representing the success or failure of the operation, the file extension for the result, and the base64 representation of the result. To turn that base64 string back into a file, decode it first:

b64_bytes = result["base64FileString"].encode('ascii')
binary_data = base64.b64decode(b64_bytes)

Most likely you’ll always be outputting PDFs, so here’s a simple bit of code that stores the result:

with open('../../output/docgen_sample.pdf', 'wb') as file:
    file.write(binary_data)
    print('Done and stored to ../../output/docgen_sample.pdf')

There’s a bit more to the API than I’ve shown here, so be sure to check the docs. But now it’s time for the real star of this API: Word.

Using Word as a Template

I’ve probably used Microsoft Word for longer than you’ve been alive and I’ve never really thought much about it. But when you begin to think of a simple Word document as a template, all of a sudden the possibilities begin to excite you. In our Document Generation API, the template system works via simple “tokens” in your document marked by opening and closing double brackets.

Consider this block of text:

See how name is surrounded by double brackets? And food and favoriteMovie? When this template is sent to the API along with the corresponding values, those tokens are replaced dynamically. In the screenshot, notice how favoriteMovie is bolded. That formatting carries through to the output: you can use any formatting, styling, or layout options you wish.

That’s one example, but you also get some built-in values as well. For example, including today as a token will insert the current date, and can be paired with date formatting to specify how the date looks:
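For example, a line in the template body might read as follows (a sketch; the full list of supported date masks is in the docs):

```
This document was generated on {{ today \@ MM/dd/yyyy }}.
```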

Remember the array of cats from earlier? You can use that to create a table in Word like this:

Notice that I’ve used two new tags here, TableStart and TableEnd, both of which reference the array, cats. Then in my table cells, I refer to the values from that array. Again, the color you see here is completely arbitrary and was me making use of the entirety of my Word design skills.
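As a sketch of those tags (shown here as one markdown-style row for illustration; in the real Word document each token sits in its own table cell):

```
| {{TableStart:cats}}{{name}} | {{gender}} | {{age}}{{TableEnd:cats}} |
```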

Here’s the template as a whole to show you everything in context:

The Result

Given the code and data values shown above, and the Word template just shared, the API produces the following PDF:

What About Converting PDF to JSON?

So far we’ve been going one direction: JSON data in, PDF out. But what if you need to go the other way—extract structured content from a PDF and work with it in your application?

Foxit’s PDF Services API includes an Extract endpoint that handles exactly this. You upload a PDF, specify whether you want TEXT, IMAGE, or PAGE-level data, and the API returns the extracted content. The text output is particularly useful if you want to feed the result into a data pipeline, search index, or AI workflow.

Here’s a quick look at how extraction works in Python. First, upload your PDF:

def uploadDoc(path, id, secret):
    headers = {
        "client_id":id,
        "client_secret":secret
    }
    with open(path, 'rb') as f:
        files = {'file': (path, f)}
        request = requests.post(f"{HOST}/pdf-services/api/documents/upload", files=files, headers=headers)
    return request.json()

doc = uploadDoc("../../inputfiles/input.pdf", CLIENT_ID, CLIENT_SECRET)

Then call the Extract endpoint with the document ID and the type of content you want. The result comes back in a structured format you can parse, store, or pass along to other tools—including an LLM if you’re building an AI document pipeline.
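To sketch what that call might look like in Python: note that the endpoint path and payload keys below are illustrative placeholders, not the exact API surface; check the Extract API docs for the real names.

```python
import os
import requests

HOST = os.environ.get('HOST')

def extractDoc(document_id, client_id, client_secret, extract_type="TEXT"):
    # Kick off an extraction job for a previously uploaded document.
    # NOTE: the endpoint path and body keys here are hypothetical
    # placeholders; consult the Extract API docs for the exact values.
    headers = {
        "client_id": client_id,
        "client_secret": client_secret,
        "Content-Type": "application/json",
    }
    body = {
        "documentId": document_id,   # from the upload call above
        "extractType": extract_type, # TEXT, IMAGE, or PAGE
    }
    response = requests.post(
        f"{HOST}/pdf-services/api/documents/extract",  # hypothetical path
        json=body,
        headers=headers,
    )
    return response.json()
```

The shape mirrors the other PDF Services calls in this post: credentials in the headers, a documentId in the body, and JSON back.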

You can read a full walkthrough in our PDF text extraction guide.

Ready to Try?

If this looks cool, be sure to check the docs for more information about the template language and API. Sign up for some free developer credentials and reach out on our developer forums with any questions.

If you’re building AI agents or LLM-powered workflows, Foxit also offers an MCP server that lets you connect your agents directly to Foxit PDF Services—so your AI tools can generate, extract, and process documents without any custom glue code.

Want the code? Get it on GitHub (Python).

If you are more of a Node person, check out that version. Get it on GitHub (Node.js).

Building Auditable, AI-Driven Document Workflows with Foxit APIs

We had an incredible time at API World 2025 connecting with developers, sharing ideas, and seeing how Foxit APIs power everything from AI-driven resume builders to interactive doodle apps. In this post, we’ll walk through the same hands-on workflow Jorge Euceda demoed live on stage—showing how to build an auditable, AI-powered document automation system using Foxit PDF Services and Document Generation APIs.

This year’s API World was packed with energy—and it was amazing meeting so many developers face-to-face at the Foxit booth. We spent three days trading ideas about document automation, AI workflows, and integration challenges.

Our team hosted a hands-on workshop and sponsored the API World Hackathon, where developers submitted 16 high-quality projects built with Foxit APIs. Submissions ranged from:

  • Automated legal-advice generators

  • Compatibility-rating apps that analyze your personality match

  • AI-powered resume optimizers that tailor your CV to dream-job descriptions

  • Collaborative doodle games that turn drawings into shareable PDFs

Each project offered a new perspective on what’s possible with Foxit APIs—and we loved seeing the creativity.

Among all the sessions, Jorge Euceda’s workshop stood out as a crowd favorite. It showed how to make AI document decisions auditable, explainable, and replayable using event sourcing and two key Foxit APIs. That’s exactly what we’ll walk through below.

Click here to grab the project overview file.

Prefer to follow along with the live session instead of reading step-by-step?
Watch Jorge’s complete “AI-Powered Resume to Report” presentation from API World 2025.
It includes every step shown below—plus real-time API responses.

What You’ll Build

A complete, auditable workflow:

Resume Upload → Extract Resume Data → AI Candidate Scoring → Generate HR Report → Event Store

This workshop is designed for technical professionals and managers who want to learn how to use application programming interfaces (APIs) and explore how AI can enhance document workflows. Attendees will get hands-on experience with Foxit’s PDF Services (extraction/OCR) and Document Generation APIs, and see how event sourcing turns AI decisions into an auditable, replayable ledger.

By the end, you’ll have a Python-based demo that extracts data from a PDF resume, analyzes it against a policy, and generates a polished HR Report PDF with a traceable event log.

Getting Set Up

To follow along, you’ll need:

  • Access to a terminal with a Python 3.9+ Environment and internet connectivity

  • Visual Studio Code or your preferred IDE

  • Basic familiarity with REST/JSON (helpful but not required)

 

  1. Install Dependencies
python -V
# virtual environment setup, requests installation
python3 -m venv myenv
source myenv/bin/activate
pip3 install requests
  2. Download the project’s zip file below

Project Source Code

Now extract the files somewhere on your computer and open the folder in Visual Studio Code or your preferred IDE.

A sample resume is provided at inputs/input_resume.pdf, but you can use any resume PDF you wish to generate a report on.

  3. Create a Foxit Account for credentials

Create a Free Developer Account now or navigate to our getting started guide, which will go over how to create a free trial.

Hands-On Walkthrough

Step 1 – Open the Project

Now that you’ve downloaded the workshop source code, navigate to the resume_to_report.py file, which will serve as our main entry point.

Once dependencies are installed and the ZIP file extracted, open your workspace and run:

python3 resume_to_report.py

You should see console logs showing:

  • An AI Report printed as JSON

  • A generated PDF (outputs/HR_Report.pdf)

  • An event ledger (outputs/events.json) with traceable actions

Step 2 — Inspect the outputs

Open the generated HR report to review:

  • Candidate name and phone

  • Overall fit score

  • Matching skills & gaps

  • Summary and policy reference in the footer

Then open events.json to see your audit trail—each entry captures the AI’s decision context.

{
  "eventType": "DecisionProposed",
  "traceId": "8d1e4df6-8ac9-4f31-9b3a-841d715c2b1c",
  "payload": {
    "fitScore": 82,
    "policyRef": "EvaluationPolicy#v1.0"
  }
}

This is your audit trail.

Step 3 — Replay & Explain a Policy Change

Replay demonstrates why event-sourcing matters:

  1. Edit inputs/evaluation_policy.json: add a hard requirement (e.g., "kubernetes") or adjust the job_description emphasis.

  2. Re-run the script with the same resume.

  3. Compare:

    • New decision and updated PDF content

    • Event log now reflects the updated rationale (PolicyLoaded snapshot → new DecisionProposed with the same traceId lineage)

  4. Note the key point: the input resume hasn’t changed; only the policy did. The event ledger explains the difference.

Policy: Drive Auditable & Replayable Decisions

The AI assistant uses a JSON policy file to control how it scores, caps, and summarizes results. Every policy snapshot is logged as its own event, creating a replayable audit trail for governance and compliance.
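A minimal sketch of that ledger pattern (append_event is a hypothetical helper for illustration, not code from the workshop project):

```python
import json
from datetime import datetime, timezone

def append_event(ledger_path, event_type, trace_id, payload):
    # Append one immutable event to the JSON ledger (e.g. outputs/events.json).
    # Each entry mirrors the shape shown above: eventType, traceId, payload.
    event = {
        "eventType": event_type,
        "traceId": trace_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    try:
        with open(ledger_path) as f:
            events = json.load(f)
    except FileNotFoundError:
        events = []  # first event creates the ledger
    events.append(event)
    with open(ledger_path, "w") as f:
        json.dump(events, f, indent=2)
    return event
```

Logging the policy snapshot as its own PolicyLoaded event means a later replay can pair every DecisionProposed with the exact policy that produced it.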

 

{
  "policyId": "EvaluationPolicy#v1.0",
  "job_description": "Looking for a software engineer with expertise in C++, Python, and AWS cloud services. Experience building scalable applications in agile teams; familiarity with DevOps and CI/CD.",
  "overall_summary": "Make the summary as short as possible",
  "hard_requirements": ["C++", "python", "aws"]
}

Notes:

  • policyId appears in both the report and event log.

  • job_description defines what the AI is looking for.

  • Changing these values creates a new traceable event.

Generate a Polished Report

Next, use the Foxit Document Generation API to fill your Word template and create a formatted PDF report.

Open inputs/hr_report_template.docx and you’ll find the following HR reporting template, with placeholders for the fields we’ll be filling in:

Tips:

  • Include lightweight branding (logo/header) to make the generated PDF presentation-ready.

  • Include a footer with the traceable Policy ID and Trace ID events

Results and Audit Trail

Here’s what the final HR Report PDF looks like:

Every decision has a Trace ID and Policy Ref, so you can recreate the report at any time and verify how the AI arrived at its result.

Why Event-Sourced AI Matters

This pattern does more than score resumes—it proves that AI decisions can be transparent, deterministic, and trustworthy.
By using Foxit APIs to extract, analyze, and generate documents, developers can bring auditability to any workflow that relies on machine logic.

Key Takeaways

  • Auditability – Every AI step emits a verifiable event.

  • Replayability – Change a policy and regenerate for deterministic results.

  • Explainability – Decisions carry policy and trace references for clear “why.”

  • Automation – PDF Services and Document Generation handle the document lifecycle end-to-end.

Try It Yourself

Ready to build your own auditable AI workflow?

Closing Thought

At API World, we set out to show how Foxit APIs can power real, transparent AI workflows—and the community response was incredible. Whether you’re building for HR, legal, finance, or creative industries, the same pattern applies:

Make your AI explain itself.

Start with the Foxit APIs, experiment with policies, and turn every AI decision into a traceable event that builds trust.

Create Custom Invoices with Word Templates and Foxit Document Generation

Invoicing is a critical part of any business. This tutorial shows how to automate the process by creating dynamic, custom PDF invoices with the Foxit Document Generation API. Learn how to design a Microsoft Word template with special tokens, prepare your data in JSON, and then use a simple Python script to generate your final invoices.

Invoicing is a critical part of any business, often involving multiple steps—gathering customer data, calculating amounts owed, and sending out invoices so your company can get paid. Foxit’s Document Generation API streamlines this process by making it easy to create well-formatted, dynamic PDF invoices. Let’s walk through an example.

Before You Start

If you want to follow along with this blog post, be sure to get your free credentials over on our developer portal. Also, read our introductory blog post, which covers the basics of working with our API.

As a reminder, the API makes use of Microsoft Word templates. These templates are essentially standard Word documents containing tokens wrapped in double brackets. When you call the API, you’ll pass the template and your data. Our API then dynamically replaces those tokens with your data and returns a nice PDF (you can also get a Word file back as well).

Creating Your Custom Invoice with Word Templates

Let’s begin by designing the template in Word. An invoice typically includes things like:

  • The customer receiving the invoice
  • The invoice number and issue date
  • The payment due date
  • A detailed list of items, including name, quantity, and price for each line item, with a total at the end

The Document Generation API makes no requirements in terms of how you design your templates. Size, alignment, and so forth can match your corporate styles and be as fancy, or as simple, as you like. Let’s consider the template below (I’ll link to where you can download this file at the end of the article):

MS Word template

Let's break it down from the top.

  • The first token, {{ invoiceNum }}, represents the invoice number for the customer.
  • The next token is special. {{ today \@ MM/dd/yyyy }} demonstrates two features of the Document Generation API. First, today is a special value representing the present time (more accurately, the moment you call the API). The second portion is a date mask that controls how the date value is formatted. Our docs have a list of available masks.
  • {{ accountName }} is another regular token.
  • The payment date, {{ paymentDueDate \@ MM/dd/yyyy }}, shows how the date mask feature can be used on dates in your own data as well.
  • Now let's look at the table. You can format tables however you like, but a common setup includes one row for the header and one row for the dynamic data. (In this example, there’s also a third row, which I'll explain shortly.) To start, you’ll use a marker tag: {{TableStart:lineItems}}, where lineItems represents an array in your data. The row ends with the matching {{TableEnd:lineItems}} tag. Between these two tags, you'll place additional tags for each value in the array. For example, we have a product, qty, price, and totalPrice for each item. You'll also see the special ROW_NUMBER value, which automatically counts each row starting at 1. Finally, the \# Currency format is applied to the totalPrice value to display it as a currency.
  • The last row in the table uses two special features together. SUM(ABOVE) creates a total of the column above it, and it can be paired with currency formatting as shown.
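Putting those tags together, the dynamic row of the table might read as follows (a sketch shown as a markdown-style row; in the real template each token sits in its own Word table cell):

```
| {{TableStart:lineItems}}{{ROW_NUMBER}} | {{product}} | {{qty}} | {{price}} | {{totalPrice \# Currency}}{{TableEnd:lineItems}} |
```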

Alright, now that you've seen the template, let's talk data!

The Data for Your Custom Invoices

Usually the data for an operation like this would come from a database, or perhaps an API with an ecommerce system. For this demo, the data will come from a simple JSON file. Let's take a look at it:

[
{
	"invoiceNum":100, 
	"accountName":"Customer Alpha", 
	"accountNumber":1,
	"paymentDueDate":"August 15, 2025",
	"lineItems":[
		{"product":"Product 1", "qty":5, "price":2, "totalPrice":10},
		{"product":"Product 5", "qty":3, "price":9, "totalPrice":27},
		{"product":"Product 4", "qty":1, "price":50, "totalPrice":50},
		{"product":"Product X", "qty":2, "price":15, "totalPrice":30}
	]
},
{
	"invoiceNum":25, 
	"accountName":"Customer Beta", 
	"accountNumber":2,
	"paymentDueDate":"August 15, 2025",
	"lineItems":[
		{"product":"Product 2", "qty":9, "price":2, "totalPrice":18},
		{"product":"Product 4", "qty":1, "price":8, "totalPrice":8},
		{"product":"Product 3", "qty":10, "price":25, "totalPrice":250},
		{"product":"Product YY", "qty":3, "price":15, "totalPrice":45},
		{"product":"Product AA", "qty":2, "price":100, "totalPrice":200}
	]
},
{
	"invoiceNum":51, 
	"accountName":"Customer Gamma", 
	"accountNumber":3,
	"paymentDueDate":"August 15, 2025",
	"lineItems":[
		{"product":"Product 9", "qty":1, "price":2, "totalPrice":2},
		{"product":"Product 23", "qty":30, "price":9, "totalPrice":270},
		{"product":"Product ZZ", "qty":6, "price":15, "totalPrice":90}
	]
}
]

The data consists of an array of 3 sets of invoice data. Each set follows the same pattern and matches what you saw above in the Word template. The only exception is the accountNumber value, which isn't used in the template. That's fine – sometimes your data will include things that aren't necessary for the final PDF. In this case, though, we're actually going to make use of it (you'll see in a moment). Onward to code!

Calling the Foxit API with Our Data

Now for my favorite part – actually calling the API. The Generate Document API is incredibly simple, needing just your credentials, a base64 version of the template, and your data. The entire demo is slightly over 50 lines of Python code, so let's look at the script and then break it down.

import os
import requests
import sys 
from time import sleep 
import base64 
import json 
from datetime import datetime

CLIENT_ID = os.environ.get('CLIENT_ID')
CLIENT_SECRET = os.environ.get('CLIENT_SECRET')
HOST = os.environ.get('HOST')

def docGen(doc, data, id, secret):
	
	headers = {
		"client_id":id,
		"client_secret":secret
	}

	body = {
		"outputFormat":"pdf",
		"documentValues": data,  
		"base64FileString":doc
	}

	request = requests.post(f"{HOST}/document-generation/api/GenerateDocumentBase64", json=body, headers=headers)

	return request.json()

with open('invoice.docx', 'rb') as file:
	bd = file.read()
	b64 = base64.b64encode(bd).decode('utf-8')

with open('invoicedata.json', 'r') as file:
	data = json.load(file)

for invoiceData in data:
	result = docGen(b64, invoiceData, CLIENT_ID, CLIENT_SECRET)

	if result["base64FileString"] is None:
		print("Something went wrong.")
		print(result)
		sys.exit()

	b64_bytes = result["base64FileString"].encode('ascii')
	binary_data = base64.b64decode(b64_bytes)

	filename = f"invoice_account_{invoiceData['accountNumber']}.pdf"

	with open(filename, 'wb') as file:
		file.write(binary_data)
		print(f"Done and stored to {filename}")

After importing the necessary modules and loading credentials from the environment, we define a simple docGen method. This method takes the template, data, and credentials, then calls the API endpoint. The API responds with the rendered PDF in Base64 format, which the method returns.

The main code of the script breaks down to:

  • Reading in the template and converting it to base64.
  • Reading in the JSON file
  • Iterating over each block of invoice data and calling the API
  • Remember how I said accountNumber wasn't used in the template? We actually use it here to generate a unique filename. Technically, you don't need to store the results at all. You could take the raw binary data and email it. But having a copy of the results does mean you can re-use it later, such as if the customer is late to pay.

Here's an example of one of the results:

Example PDF result

Next Steps

If you want to try this demo yourself, first grab yourself a shiny free set of credentials and then head over to our GitHub to grab the template, Python, and sample output values yourself.

Convert Office Docs to PDFs Automatically with Foxit PDF Services API

See how to build a powerful, automated workflow that converts Office documents (Word, Excel, PowerPoint) into PDFs. This step-by-step guide uses the Foxit PDF Services API, the Pipedream low-code platform, and Dropbox to create a seamless “hands-off” document processing system. We’ll walk through every step, from triggering on a new file to uploading the final PDF.

With our REST APIs, it is now possible for any developer to set up an integration and document workflow using their language of choice. But what about workflow automations? Luckily, this is even simpler (depending on the platform, of course), as you can rely on the workflow service to handle a lot of the heavy lifting for whatever automation needs you may have. In this blog post, I’m going to demonstrate a workflow making use of Pipedream. Pipedream is a low-code platform that lets you build flexible workflows by piecing together various small atomic steps. It’s been a favorite of mine for some time now, and I absolutely recommend it. But note that what I’ll be showing here today could absolutely be done on other platforms, like n8n.

Want the televised version? Catch the video below:

Our Office Document to PDF Workflow

Our workflow is based on Dropbox folders and handles automatic conversion of Office docs to PDFs. To support that, it does the following:

  • Listen for new files in a Dropbox folder
  • Do a quick sanity check (is it in the input subdirectory, and is it an Office file?)
  • Download the file to Pipedream
  • Send it to Foxit via the Upload API
  • Kick off the appropriate conversion based on the Office type
  • Check status via the Status API
  • When done, download the result to Pipedream
  • And finally, push it up to Dropbox in an output subdirectory

Here’s a nice graphical representation of this workflow:

Workflow chart

Before we get into the code, note that workflow platforms like Pipedream are incredibly flexible. When I build workflows with platforms like this, I try to make each step as atomic and focused as possible. I could absolutely have built a shorter, more compact version of this workflow. However, having it broken out like this makes it easier to copy and modify going forward (which is exactly how this one came about; it was based on a simpler, earlier version).

Ok, let's break it down, step-by-step.

Getting Triggered

In Pipedream, workflows begin with a trigger. While there are many options for this, my workflow uses a "New File From Dropbox" trigger. I logged into Dropbox via Pipedream so it had access to my account. I then specified a top level folder, "Foxit", for the integration. Additionally, there are two more important settings:

  • Recursive – this tells the trigger to fire for any new file under the root directory, "Foxit". My Dropbox Foxit folder has both an input and output directory.
  • Include Link – this tells Pipedream to ensure we get a link to the new file. This is required to download it later.

Trigger details

Filtering the Document Flow

The next two steps are focused on filtering and stopping the workflow, if necessary. The first, end_if_output, is a built-in Pipedream step that lets me provide a condition for the workflow to end: I check the path value from the trigger (the path of the new file), and if it contains "output", the new file landed in the output directory and the workflow should not run.

Declaring the end condition

The next filter is a code step that handles two tasks. First, it checks whether the new file is an Office type supported by our APIs: .docx, .xlsx, or .pptx. If the extension isn’t one of these, the workflow ends programmatically.

Later in the workflow, I’ll also need that same extension to route the request to the correct endpoint. So the code handles both: validation and preservation of the extension.

import os 

def handler(pd: "pipedream"):
  base, extension = os.path.splitext(pd.steps['trigger']['event']['name'])

  if extension == ".docx":
    api = "/pdf-services/api/documents/create/pdf-from-word"
  elif extension == ".xlsx":
    api = "/pdf-services/api/documents/create/pdf-from-excel"
  elif extension == ".pptx":
    api = "/pdf-services/api/documents/create/pdf-from-ppt"
  else:
    return pd.flow.exit(f"Exiting workflow due to unknown extension: {extension}.")

  return { "api":api }

As you can see, if the extension isn't valid, I'm exiting the workflow using pd.flow.exit (while also logging out a proper message, which I can check later via the Pipedream UI). I also return the right endpoint if a supported extension was used. This will be useful later in the flow.

Download and Upload API Data

The next two steps are primarily about moving data from the input source (Dropbox) to our API (Foxit).

The first step, download_to_tmp, uses a simple Python script to transfer the Dropbox file into the /tmp directory for use in the workflow:

import requests

def handler(pd: "pipedream"):
    download_url = pd.steps["trigger"]["event"]["link"]
    file_path = f"/tmp/{pd.steps['trigger']['event']['name']}"

    with requests.get(download_url, stream=True) as response:
      response.raise_for_status()
      with open(file_path, "wb") as file:
          for chunk in response.iter_content(chunk_size=8192):
            file.write(chunk)
            
    return file_path

Notice at the end that I return the path I used in Pipedream. This action then leads directly into the next step of uploading to Foxit via the Upload API:

import os 
import requests 

def handler(pd: "pipedream"):
  clientid = os.environ.get('FOXIT_CLIENT_ID')
  secret = os.environ.get('FOXIT_CLIENT_SECRET')
  HOST = os.environ.get('FOXIT_HOST')
  
  headers = {
    "client_id":clientid,
    "client_secret":secret
  }

  with open(pd.steps['download_to_tmp']['$return_value'], 'rb') as f:
    files = {'file': (pd.steps['download_to_tmp']['$return_value'], f)}

    request = requests.post(f"{HOST}/pdf-services/api/documents/upload", files=files, headers=headers)

    return request.json()

The result of this will be a documentId value that looks like so:

{
  "documentId": "<string>"
}

Pipedream lets you define environment variables and I've made use of them for my Foxit credentials and host. Grab your own free credentials here!

Converting the Document Using the Foxit API

The next step will actually kick off the conversion. My workflow supports three different input types (Word, PowerPoint, and Excel), which map to three API endpoints. But remember that earlier we sniffed the extension of our input and picked the endpoint there. Since all three APIs work the same way, that's literally all we need to do – hit the endpoint and pass the documentId value from the previous step.

import os 
import requests 

def handler(pd: "pipedream"):

  clientid = os.environ.get('FOXIT_CLIENT_ID')
  secret = os.environ.get('FOXIT_CLIENT_SECRET')
  HOST = os.environ.get('FOXIT_HOST')
  
  headers = {
    "client_id":clientid,
    "client_secret":secret,
    "Content-Type":"application/json"
  }

  body = {
    "documentId": pd.steps['upload_to_foxit']['$return_value']['documentId']
  }

  api = pd.steps['extension_check']['$return_value']['api']
  
  print(f"{HOST}{api}")
  request = requests.post(f"{HOST}{api}", json=body, headers=headers)
  return request.json()

The result of this call, as with nearly all of the Foxit APIs, will be a task:

{
  "taskId": "<string>"
}

Checking Your Document API Status

The next step is one that may take a few seconds – checking the job status. Foxit's endpoint returns a value like so:

{
  "taskId": "<string>",
  "status": "<string>",
  "progress": "<int32>",
  "resultDocumentId": "<string>",
  "error": {
    "code": "<string>",
    "message": "<string>"
  }
}

To use this, I just hit the API, check for status, and if it’s not done, wait five seconds and call it again. Here’s the Python code for this:

import os 
import requests 
from time import sleep 

def handler(pd: "pipedream"):

  clientid = os.environ.get('FOXIT_CLIENT_ID')
  secret = os.environ.get('FOXIT_CLIENT_SECRET')
  HOST = os.environ.get('FOXIT_HOST')
  
  headers = {
    "client_id":clientid,
    "client_secret":secret,
    "Content-Type":"application/json"
  }

  done = False
  while done is False:

    request = requests.get(f"{HOST}/pdf-services/api/tasks/{pd.steps['create_conversion_job']['$return_value']['taskId']}", headers=headers)
    status = request.json()
    if status["status"] == "COMPLETED":
      done = True
      return status
    elif status["status"] == "FAILED":
      print("Failure. Here is the last status:")
      print(status)
      return pd.flow.exit("Failure in job")
    else:
      print(f"Current status, {status['status']}, percentage: {status['progress']}")
      sleep(5)

As shown, errors are simply logged by default—but you could enhance this by adding notifications, such as emailing an admin, sending a text message, or other alerts.

On success, the final output is passed along, including the key value we care about: resultDocumentId.

Download and Upload – Again

Ok, if the workflow has gotten this far, it's time to finish the process. The next step handles downloading the result from Foxit using the download endpoint:

import requests
import os

def handler(pd: "pipedream"):
  clientid = os.environ.get('FOXIT_CLIENT_ID')
  secret = os.environ.get('FOXIT_CLIENT_SECRET')
  HOST = os.environ.get('FOXIT_HOST')

  headers = {
    "client_id":clientid,
    "client_secret":secret,
  }

  # Given a file of input.docx, we need to use input.pdf
  base_name, _ = os.path.splitext(pd.steps['trigger']['event']['name'])
  path = f"/tmp/{base_name}.pdf"
  print(path) 
  
  with open(path, "wb") as output:
		
    bits = requests.get(f"{HOST}/pdf-services/api/documents/{pd.steps['check_job']['$return_value']['resultDocumentId']}/download", stream=True, headers=headers).content 
    output.write(bits)
            
    return {
      "filename":f"{base_name}.pdf",
      "path":path
    }

Note that I'm using the base name of the input, which is basically the filename minus the extension. So for example, input.docx becomes input, to which I then append a .pdf extension to create the filename used to store the result locally in Pipedream.

Finally, I push the file back up to Dropbox, but for this, I can use a built-in Pipedream step that can upload to Dropbox. Here's how I configured it:

  • Path: Once again, Foxit
  • File Name: This one's a bit more complex: I want to store the file in the output subdirectory and ensure the filename is dynamic. Pipedream lets you mix hard-coded values and expressions, so I used: output/{{steps.download_result_to_tmp.$return_value.filename}}. The portion inside the double brackets is dynamic, based on the PDF file generated previously.
  • File Path: This is an expression as well, pointing to where I saved the file previously: {{steps.download_result_to_tmp.$return_value.path}}
  • Mode: Finally, the mode attribute specifies what to do on a conflict. This setting will be based on whatever your particular workflow needs are, but for my workflow, I simply told Dropbox to overwrite the existing file.

Here's how that step looks configured in Pipedream:

Upload step

Conclusion

Believe it or not, that's the entire workflow. Once enabled, it runs in the background: I can simply place files into my Dropbox folder and my Office docs will be automatically converted. What's next? Definitely get your own free credentials and check out the docs to get started. If you run into any trouble at all, hit us up on the forums and we'll be glad to help!

Introducing PDF APIs from Foxit

Get started with Foxit’s new PDF APIs—convert Word to PDF, generate documents, and embed files using simple, scalable REST APIs. Includes sample Python code and walkthrough.

At the end of June, Foxit introduced a brand-new suite of tools to help developers work with documents. These APIs cover a wide range of features, including:

    • Convert between Office document formats and PDF files seamlessly
    • Optimize, manipulate, and secure PDFs with advanced APIs
    • Generate dynamic documents using Microsoft Word templates
    • Extract text and images from PDFs with powerful tools
    • Embed PDFs into web pages in a context-aware, controlled manner
    • Integrate with eSign APIs for streamlined signature workflows


These APIs are simple to use, and best of all, follow the “don’t surprise me” principle of development. In this post, I’m going to demonstrate one simple example – converting a Word document to PDF – but you can rest assured that nearly all the APIs follow incredibly similar patterns. I’ll be using Python for my examples here, but will link to a Node.js version of the same example. And given that we’re talking REST APIs here, any language is welcome to join the document party. Let’s dive in.

Credentials

Before we go any further, head over to our developer portal and grab a set of free credentials. This will include a client ID and secret values you’ll need to make use of the API.

Don’t want to read all of this? You can also follow along by video:

API Flow

As I mentioned above, most of the PDF Services APIs will follow a similar flow. This comes down to:

  • Upload your input (like a Word document)
  • Kick off a job (like converting to PDF)
  • Check the job (hey, how ya doin?)
  • Download the result

The great thing is, once you’ve completed one integration (this post focuses on converting Word to PDF), switching to another is easy, and much of your existing code can be reused. A lazy developer is a happy developer! Let’s get started.

Loading Credentials

My script begins with its imports, then loads the credentials and API root host via the environment:

import os, sys, requests
from time import sleep

CLIENT_ID = os.environ.get('CLIENT_ID')
CLIENT_SECRET = os.environ.get('CLIENT_SECRET')
HOST = os.environ.get('HOST')

It’s never a good idea to hard-code credentials in your code. But if you do it this one time, I won’t tell. Honest.
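One small refinement you may want in a real script: fail fast when a variable isn’t set, rather than sending empty credentials to the API. Here’s a minimal sketch; load_credentials is my own helper name, not part of the Foxit sample.

```python
import os

def load_credentials():
    """Read the required settings from the environment, failing fast if any are missing."""
    required = ("CLIENT_ID", "CLIENT_SECRET", "HOST")
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return tuple(os.environ[name] for name in required)
```

That way a misconfigured shell produces one clear error instead of a confusing authentication failure later on.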

Uploading Your Input

As I mentioned, in this example we’ll be making use of the Word to PDF API. Our input will be a Word document, which we’ll upload to Foxit using the upload API. This endpoint is fairly simple – aside from your credentials, all you need to provide is the binary data of the input file. Here’s the method I created to make this process easier:

def uploadDoc(path, id, secret):
    
    headers = {
        "client_id":id,
        "client_secret":secret
    }

    with open(path, 'rb') as f:
        files = {'file': (path, f)}

        request = requests.post(f"{HOST}/pdf-services/api/documents/upload", files=files, headers=headers)
        return request.json()

And here’s how it’s used:

doc = uploadDoc("../../inputfiles/input.docx", CLIENT_ID, CLIENT_SECRET)
print(f"Uploaded doc to Foxit, id is {doc['documentId']}")

The upload API only returns one value, a documentId, which we can use in future calls.
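Since a failed upload won’t include that key, you may want to surface errors loudly instead of letting a KeyError bubble up later. A hedged sketch (get_document_id is my own helper; the exact error shape depends on what the API actually returns on failure):

```python
def get_document_id(response):
    """Return documentId from an upload response, or raise with the full payload.

    Assumes only that a failed upload lacks the documentId key.
    """
    if "documentId" not in response:
        raise RuntimeError(f"Upload did not return a documentId: {response}")
    return response["documentId"]
```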

Starting the Job

Each API operation is a job creator. By this I mean you call the endpoint and it begins your action. For Word to PDF, the only required input is the document ID from the previous call. We can build a nice little wrapper function like so:

def convertToPDF(doc, id, secret):
    
    headers = {
        "client_id":id,
        "client_secret":secret,
        "Content-Type":"application/json"
    }

    body = {
        "documentId":doc	
    }

    request = requests.post(f"{HOST}/pdf-services/api/documents/create/pdf-from-word", json=body, headers=headers)
    return request.json()

And then call it like so:

task = convertToPDF(doc["documentId"], CLIENT_ID, CLIENT_SECRET)
print(f"Created task, id is {task['taskId']}")

The result of this call, if no errors were found, is a taskId. We can use this to gauge how the job’s performing. Let’s do that now.

Job Checking

Ok, so the next part can be a bit tricky depending on your language of choice. We need to use the task status endpoint to determine how the job is progressing. How often and how quickly we poll will depend on your platform and needs. For our little sample script here, everything runs at once. I wrote a function that checks the status; if the job isn’t finished (whether successfully or not), it pauses briefly before trying again. While this approach isn’t the most sophisticated, it should work well enough for basic testing:

def checkTask(task, id, secret):

    headers = {
        "client_id":id,
        "client_secret":secret,
        "Content-Type":"application/json"
    }

    done = False
    while done is False:

        request = requests.get(f"{HOST}/pdf-services/api/tasks/{task}", headers=headers)
        status = request.json()
        if status["status"] == "COMPLETED":
            done = True
            # really only need resultDocumentId, will address later
            return status
        elif status["status"] == "FAILED":
            print("Failure. Here is the last status:")
            print(status)
            sys.exit()
        else:
            print(f"Current status, {status['status']}, percentage: {status['progress']}")
            sleep(5)

As you can see, I’m using a while loop that, at least in theory, will continue running until a success or failure response is returned, with a five-second pause between each call. You can adjust that interval as needed; test different values to see what works best for your use case. Typically, most API calls should complete in under ten seconds, so a five-second delay felt like a reasonable default.

Each call to the endpoint returns a task status result. Here’s an example:

{
    'taskId': '685abc95a0d113558e4204d7', 
    'status': 'COMPLETED', 
    'progress': 100, 
    'resultDocumentId': '685abc952475582770d6917b'
}

The important part here is the status. But you could also use progress to give some feedback to the code waiting for results. Here’s my code calling this:

result = checkTask(task["taskId"], CLIENT_ID, CLIENT_SECRET)
print(f"Final result: {result}")

Downloading Your Result

The last piece of the puzzle is simply saving the result. If you noticed above, the task returned a resultDocumentId value. Taking that, and the Download Document endpoint, we can build a utility to store the result like so:

def downloadResult(doc, path, id, secret):
    
    headers = {
        "client_id":id,
        "client_secret":secret
    }

    with open(path, "wb") as output:
        
        bits = requests.get(f"{HOST}/pdf-services/api/documents/{doc}/download", stream=True, headers=headers).content 
        output.write(bits)

And finally, call it:

downloadResult(result["resultDocumentId"], "../../output/input.pdf", CLIENT_ID, CLIENT_SECRET)
print("Done and saved to: ../../output/input.pdf")

And that’s it! While this script could certainly benefit from more robust error handling, it demonstrates the basic flow. As mentioned, most of our APIs follow this same logic.

Next Steps

Want the complete scripts? Get them on GitHub.

Want it in Node.js? Get it on GitHub.

Rather try this yourself? Sign up for a free developer account now. Need help? Head over to our developer forums and post your questions and comments.