Generated: 2025-06-08T21:39:18Z --- START OF FILE: docs/build/ao-connect.md --- # HyperBEAM from AO Connect This guide explains how to interact with a process using HyperBEAM and `aoconnect`. ## Prerequisites - Node.js environment - `@permaweb/aoconnect` library - The latest version of `aos` - Wallet file (`wallet.json`) containing your cryptographic keys - A `HyperBEAM` node running with the `genesis_wasm` profile (`rebar3 as genesis_wasm shell` from your HyperBEAM directory) - The Process ID for a process created with `genesis_wasm` (this is the default in the latest version of `aos`). ## Step 1: Environment Setup Install necessary dependencies: ```bash npm install @permaweb/aoconnect ``` Ensure your wallet file (`wallet.json`) is correctly formatted and placed in your project directory. > NOTE: you can create a test wallet using this line: > `npx -y @permaweb/wallet > wallet.json` ## Step 2: Establish Connection Create a new JavaScript file (e.g., `index.js`) and set up your Permaweb connection. You will need a `processId` of a process that you want to interact with. ```javascript import { connect, createSigner } from '@permaweb/aoconnect' import fs from 'node:fs' const jwk = JSON.parse(fs.readFileSync('wallet.json', 'utf-8')) // The Process ID to interact with const processId = ""; const { request } = connect({ MODE: 'mainnet', URL: 'http://localhost:8734', signer: createSigner(jwk) }) ``` ## Step 3: Pushing a Message to a Process Use the `request` function to send a message to the process. In `aoconnect`, this is done by using the `push` hyperpath. ```javascript const processResult = await request({ path: `/${processId}~process@1.0/push/serialize~json@1.0`, method: 'POST', target: processId, "signingFormat": "ANS-104" }) console.log(processResult) ``` ## Full Example To run the full script, combine the snippets from Step 2 and 3 into `index.js`: ```javascript import { connect, createSigner } from '@permaweb/aoconnect' import fs from 'node:fs' const jwk = JSON.parse(fs.readFileSync('wallet.json', 'utf-8')) const processId = ""; const { request } = connect({ MODE: 'mainnet', URL: 'http://localhost:8734', signer: createSigner(jwk) }) const processResult = await request({ path: `/${processId}~process@1.0/push/serialize~json@1.0`, method: 'POST', target: processId, "signingFormat": "ANS-104" }) console.log(processResult) ``` Now, run it: ```bash node index.js ``` You should see an object logged to the console, containing the ID of the message that was sent. ## Conclusion Following these steps, you've successfully sent a message to a process. This is a fundamental interaction for building applications on hyperAOS. --- END OF FILE: docs/build/ao-connect.md --- --- START OF FILE: docs/build/building-devices.md --- # Extending HyperBEAM with Devices We encourage you to extend HyperBEAM with devices for functionality that is general purpose and reusable across different applications. ## What are Devices? As explained in the [introduction](../introduction/hyperbeam-devices.md), devices are the core functional units within HyperBEAM. They are self-contained modules that process messages and perform specific actions, forming the building blocks of your application's logic. HyperBEAM comes with a set of powerful [built-in devices](../devices/what-are-devices.md) that handle everything from process management (`~process@1.0`) and message scheduling (`~scheduler@1.0`) to executing WebAssembly (`~wasm64@1.0`) and Lua scripts (`~lua@5.3a`). 
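Because devices are addressed over HTTP via HyperPATHs, you can exercise one directly from the command line. A minimal sketch, assuming a local node listening on port 8734 (as in the `aoconnect` example above) and using the built-in `~message@1.0` device documented later in these docs:

```bash
# Build a transient message from query parameters and read back its `hello` key.
# Expected response body: "world"
curl "http://localhost:8734/~message@1.0&hello=world/hello"
```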
## Creating Your Own Devices (Coming Soon) We will create more in depth guides for building devices in Lua and Erlang in the future. --- END OF FILE: docs/build/building-devices.md --- --- START OF FILE: docs/build/extending-hyperbeam.md --- # Extending HyperBEAM HyperBEAM's modular design, built on AO-Core principles and Erlang/OTP, makes it highly extensible. You can add new functionalities or modify existing behaviors primarily by creating new **Devices** or implementing **Pre/Post-Processors**. !!! warning "Advanced Topic" Extending HyperBEAM requires a good understanding of Erlang/OTP, the AO-Core protocol, and HyperBEAM's internal architecture. This guide provides a high-level overview; detailed implementation requires deeper exploration of the source code. ## Approach 1: Creating New Devices This is the most common way to add significant new capabilities. A Device is essentially an Erlang module (typically named `dev_*.erl`) that processes AO-Core messages. **Steps:** 1. **Define Purpose:** Clearly define what your device will do. What kind of messages will it process? What state will it manage (if any)? What functions (keys) will it expose? 2. **Create Module:** Create a new Erlang module (e.g., `src/dev_my_new_device.erl`). 3. **Implement `info/0..2` (Optional but Recommended):** Define an `info` function to signal capabilities and requirements to HyperBEAM (e.g., exported keys, variant/version ID). ```erlang info() -> #{ variant => <<"MyNewDevice/1.0">>, exports => [<<"do_something">>, <<"get_status">>] }. ``` 4. **Implement Key Functions:** Create Erlang functions corresponding to the keys your device exposes. These functions typically take `StateMessage`, `InputMessage`, and `Environment` as arguments and return `{ok, NewMessage}` or `{error, Reason}`. ```erlang do_something(StateMsg, InputMsg, Env) -> % ... perform action based on InputMsg ... NewState = ..., % Calculate new state {ok, NewState}. get_status(StateMsg, _InputMsg, _Env) -> % ... read status from StateMsg ... StatusData = ..., {ok, StatusData}. ``` 5. **Handle State (If Applicable):** Devices can be stateless or stateful. Stateful devices manage their state within the `StateMessage` passed between function calls. 6. **Register Device:** Ensure HyperBEAM knows about your device. This might involve adding it to build configurations or potentially a dynamic registration mechanism if available. 7. **Testing:** Write EUnit tests for your device's functions. **Example Idea:** A device that bridges to another blockchain network, allowing AO processes to read data or trigger transactions on that chain. ## Approach 2: Building Pre/Post-Processors Pre/post-processors allow you to intercept incoming requests *before* they reach the target device/process (`preprocess`) or modify the response *after* execution (`postprocess`). These are often implemented using the `dev_stack` device or specific hooks within the request handling pipeline. **Use Cases:** * **Authentication/Authorization:** Checking signatures or permissions before allowing execution. * **Request Modification:** Rewriting requests, adding metadata, or routing based on specific criteria. * **Response Formatting:** Changing the structure or content type of the response. * **Metering/Logging:** Recording request details or charging for usage before or after execution. **Implementation:** Processors often involve checking specific conditions (like request path or headers) and then either: a. Passing the request through unchanged. b. 
Modifying the request/response message structure. c. Returning an error or redirect.

**Example Idea:** A preprocessor that automatically adds a timestamp tag to all incoming messages for a specific process.

## Approach 3: Custom Routing Strategies

While `dev_router` provides basic strategies (round-robin, etc.), you could potentially implement a custom load balancing or routing strategy module that `dev_router` could be configured to use. This would involve understanding the interfaces expected by `dev_router`.

**Example Idea:** A routing strategy that queries worker nodes for their specific capabilities before forwarding a request.

## Getting Started

1. **Familiarize Yourself:** Deeply understand Erlang/OTP and the HyperBEAM codebase (`src/` directory), especially [`hb_ao.erl`](../resources/source-code/hb_ao.md), [`hb_message.erl`](../resources/source-code/hb_message.md), and existing `dev_*.erl` modules relevant to your idea.
2. **Study Examples:** Look at simple devices like `dev_patch.erl` or more complex ones like `dev_process.erl` to understand patterns.
3. **Start Small:** Implement a minimal version of your idea first.
4. **Test Rigorously:** Use `rebar3 eunit` extensively.
5. **Engage Community:** Ask questions in developer channels if you get stuck.

Extending HyperBEAM allows you to tailor the AO network's capabilities to specific needs, contributing to its rich and evolving ecosystem.

--- END OF FILE: docs/build/extending-hyperbeam.md ---

--- START OF FILE: docs/build/migrating-from-legacynet.md ---

# Migrating from `legacynet` to HyperBEAM

HyperBEAM represents a significant evolution from the original `legacynet`, offering a more robust, performant, and feature-rich environment for running AO processes. If you have processes currently running on `legacynet`, migrating them to HyperBEAM is a crucial step to leverage these advancements.

## Why Migrate to HyperBEAM?

HyperBEAM is not just an update; it's a new foundation designed for building high-performance decentralized applications. Key benefits include:

* **Enhanced Performance:** Built on Erlang/OTP, HyperBEAM's architecture is optimized for concurrency and fault tolerance, resulting in faster scheduling and more responsive applications.
* **Powerful Developer Tools:** HyperBEAM exposes all of its state through HTTP, so you can use any standard HTTP library to interact with it.
* **Easy Extensibility:** It allows core feature extensibility through [modular devices](../introduction/hyperbeam-devices.md).

The process of migration involves updating your process to take advantage of the new features available in HyperBEAM. One of the most impactful new features is the ability to directly expose parts of your process state for immediate reading via HTTP, which dramatically improves the performance of web frontends and data-driven services.

## Exposing Process State with the Patch Device

The [`~patch@1.0`](../resources/source-code/dev_patch.md) device provides a mechanism for AO processes to expose parts of their internal state, making it readable via direct HTTP GET requests along the process's HyperPATH.

### Why Use the Patch Device?

Standard AO process execution typically involves sending a message to a process, letting it compute, and then potentially reading results from its outbox or state after the computation is scheduled and finished. This is asynchronous. The `patch` device allows for a more direct, synchronous-like read pattern.
A process can use it to "patch" specific data elements from its internal state into a location that becomes directly accessible via a HyperPATH GET request *before* the full asynchronous scheduling might complete. This is particularly useful for: * **Web Interfaces:** Building frontends that need to quickly read specific data points from an AO process without waiting for a full message round-trip. * **Data Feeds:** Exposing specific metrics or state variables for monitoring or integration with other systems. * **Caching:** Allowing frequently accessed data to be retrieved efficiently via simple HTTP GETs. ### How it Works 1. **Process Logic:** Inside your AO process code (e.g., in Lua or WASM), when you want to expose data, you construct an *outbound message* targeted at the [`~patch@1.0`](../resources/source-code/dev_patch.md) device. 2. **Patch Message Format:** This outbound message typically includes tags that specify: * `device = 'patch@1.0'` * A `cache` tag containing a table. The **keys** within this table become the final segments in the HyperPATH used to access the data, and the **values** are the data itself. * Example Lua using `aos`: `Send({ Target = ao.id, device = 'patch@1.0', cache = { mydatakey = MyValue } })` 3. **HyperBEAM Execution:** When HyperBEAM executes the process schedule and encounters this outbound message: * It invokes the `dev_patch` module. * `dev_patch` inspects the message. * It takes the keys from the `cache` table (`mydatakey` in the example) and their associated values (`MyValue`) and makes these values available under the `/cache/` path segment. 4. **HTTP Access:** You (or any HTTP client) can now access this data directly using a GET request: ``` GET /~process@1.0/compute/cache/ # Or potentially using /now/ GET /~process@1.0/now/cache/ ``` The HyperBEAM node serving the request will resolve the path up to `/compute/cache` (or `/now/cache`), then use the logic associated with the patched data (`mydatakey`) to return the `MyValue` directly. ### Initial State Sync (Optional) It can be beneficial to expose the initial state of your process via the `patch` device as soon as the process is loaded or spawned. This makes key data points immediately accessible via HTTP GET requests without requiring an initial interaction message to trigger a `Send` to the patch device. This pattern typically involves checking a flag within your process state to ensure the initial sync only happens once. Here's an example from the Token Blueprint, demonstrating how to sync `Balances` and `TotalSupply` right after the process starts: ```lua -- Place this logic at the top level of your process script, -- outside of specific handlers, so it runs on load. -- Initialize the sync flag if it doesn't exist InitialSync = InitialSync or 'INCOMPLETE' -- Sync state on spawn/load if not already done if InitialSync == 'INCOMPLETE' then -- Send the relevant state variables to the patch device Send({ device = 'patch@1.0', cache = { balances = Balances, totalsupply = TotalSupply } }) -- Update the flag to prevent re-syncing on subsequent executions InitialSync = 'COMPLETE' print("Initial state sync complete. Balances and TotalSupply patched.") end ``` **Explanation:** 1. `InitialSync = InitialSync or 'INCOMPLETE'`: This line ensures the `InitialSync` variable exists in the process state, initializing it to `'INCOMPLETE'` if it's the first time the code runs. 2. `if InitialSync == 'INCOMPLETE' then`: The code proceeds only if the initial sync hasn't been marked as complete. 3. 
`Send(...)`: The relevant state (`Balances`, `TotalSupply`) is sent to the `patch` device, making it available under `/cache/balances` and `/cache/totalsupply`. 4. `InitialSync = 'COMPLETE'`: The flag is updated, so this block won't execute again in future message handlers within the same process lifecycle. This ensures that clients or frontends can immediately query essential data like token balances as soon as the process ID is known, improving the responsiveness of applications built on AO. ### Example (Lua in `aos`) ```lua -- In your process code (e.g., loaded via .load) Handlers.add( "PublishData", Handlers.utils.hasMatchingTag("Action", "PublishData"), function (msg) local dataToPublish = "Some important state: " .. math.random() -- Expose 'currentstatus' key under the 'cache' path Send({ device = 'patch@1.0', cache = { currentstatus = dataToPublish } }) print("Published data to /cache/currentstatus") end ) ``` ```bash -- Spawning and interacting default@aos-2.0.6> MyProcess = spawn(MyModule) default@aos-2.0.6> Send({ Target = MyProcess, Action = "PublishData" }) -- Wait a moment for scheduling ``` ### Avoiding Key Conflicts When defining keys within the `cache` table (e.g., `cache = { mydatakey = MyValue }`), these keys become path segments under `/cache/` (e.g., `/compute/cache/mydatakey` or `/now/cache/mydatakey`). It's important to choose keys that do not conflict with existing, reserved path segments used by HyperBEAM or the `~process` device itself for state access. Using reserved keywords as your cache keys can lead to routing conflicts or prevent you from accessing your patched data as expected. While the exact list can depend on device implementations, it's wise to avoid keys commonly associated with state access, such as: `now`, `compute`, `state`, `info`, `test`. It's recommended to use descriptive and specific keys for your cached data to prevent clashes with the underlying HyperPATH routing mechanisms. For example, instead of `cache = { state = ... }`, prefer `cache = { myappstate = ... }` or `cache = { usercount = ... }`. !!! warning Be aware that HTTP path resolution is case-insensitive and automatically normalizes paths to lowercase. While the `patch` device itself stores keys with case sensitivity (e.g., distinguishing `MyKey` from `mykey`), accessing them via an HTTP GET request will treat `/cache/MyKey` and `/cache/mykey` as the same path. This means that using keys that only differ in case (like `MyKey` and `mykey` in your `cache` table) will result in unpredictable behavior or data overwrites when accessed via HyperPATH. To prevent these issues, it is **strongly recommended** to use **consistently lowercase keys** within the `cache` table (e.g., `mykey`, `usercount`, `appstate`). ## Key Points * **Path Structure:** The data is exposed under the `/cache/` path segment. The tag name you use *inside* the `cache` table in the `Send` call (e.g., `currentstatus`) becomes the final segment in the accessible HyperPATH (e.g., `/compute/cache/currentstatus`). * **Data Types:** The `patch` device typically handles basic data types (strings, numbers) within the `cache` table effectively. Complex nested tables might require specific encoding or handling. * **`compute` vs `now`:** Accessing patched data via `/compute/cache/...` typically serves the last known patched value quickly. Accessing via `/now/cache/...` might involve more computation to ensure the absolute latest state before checking for the patched key under `/cache/`. 
* **Not a Replacement for State:** Patching is primarily for *exposing* reads. It doesn't replace the core state management within your process handler logic. By using the `patch` device, you can make parts of your AO process state easily and efficiently readable over standard HTTP, bridging the gap between decentralized computation and web-based applications. --- END OF FILE: docs/build/migrating-from-legacynet.md --- --- START OF FILE: docs/build/quick-start.md --- # Quick Start with HyperBEAM Welcome to building on HyperBEAM, the decentralized operating system built on AO. HyperBEAM leverages the permanent storage of Arweave with the flexible, scalable computation enabled by the AO-Core protocol and its HyperBEAM implementation. This allows you to create truly autonomous applications, agents, and services that run trustlessly and permissionlessly. ## Thinking in HyperBEAM Your serverless function can be a simple Lua script, or it can be a more complex WASM module. It will be deployed as a process on HyperBEAM whose state is stored on Arweave and is cached on HyperBEAM nodes. This gives you both benefits: permanence and speed. At its heart, building on HyperBEAM involves: 1. **Processes:** Think of these as independent programs or stateful contracts. Each process has a unique ID and maintains its own state. 2. **Messages:** You interact with processes by sending them messages. These messages trigger computations, update state, or cause the process to interact with other processes or the outside world. Messages are processed by [Devices](../introduction/ao-devices.md), which define *how* the computation happens (e.g., running WASM code, executing Lua scripts, managing state transitions). ## Starting `aos`: Your Development Environment The primary tool for interacting with AO and developing processes is `aos`, a command-line interface and development environment. === "npm" ```bash npm i -g https://get_ao.arweave.net ``` === "bun" ```bash bun install -g https://get_ao.arweave.net ``` === "pnpm" ```bash pnpm add -g https://get_ao.arweave.net ``` While you don't need to run a HyperBEAM node yourself, you do need to connect to one to interact with the network during development. To start `aos`, simply run the command in your terminal: ```bash aos --mainnet "https://dev-router.forward.computer" myMainnetProcess ``` This connects you to an interactive Lua environment running within a **process** on the AO network. This process acts as your command-line interface (CLI) to the AO network, allowing you to interact with other processes, manage your wallet, and develop new AO processes. When you specify `--mainnet `, it connects to the `genesis_wasm` device running on the HyperBEAM node at the supplied URL. !!! note **What `aos` is doing:** * **Connecting:** Establishes a connection from your terminal to a remote process running the `aos` environment. * **Loading Wallet:** Looks for a default Arweave key file (usually `~/.aos.json` or specified via arguments) to load into the remote process context for signing outgoing messages. * **Providing Interface:** Gives you a Lua prompt (`default@aos-2.0.6>`) within the remote process where you can: * Load code for new persistent processes on the network. * Send messages to existing network processes. * Inspect process state. * Manage your local environment. ## Initializing a Variable From the `aos` prompt, you can assign a variable. Let's assign a basic Lua process that just holds some data: ```lua default@aos-2.0.6> myVariable = "Hello from aos!" 
``` This assigns the string "Hello from aos!" to the variable `myVariable` within the current process's Lua environment. ```lua default@aos-2.0.6> myVariable Hello from aos! ``` This displays the content of `myVariable`. ## Sending Your First Message Let's send our variable to another process. ```lua default@aos-2.0.6> Send({ Target = ao.id, Data = myVariable }) ``` You should see the following output: ```lua New Message From : Data = Hello from aos! ``` ## Creating Your First Handler Handlers are decentralized functions that can be triggered by messages. Follow these steps to create and interact with your first message handler in AO: 1. **Create a Lua File to Handle Messages:** Create a new file named `main.lua` in your local directory and add the following Lua code: ```lua Handlers.add( "HelloWorld", function(msg) print("Handler triggered by message from: " .. msg.From) msg.reply({ Data = "Hello back from your process!" }) end ) print("HelloWorld handler loaded.") ``` * ` --- END OF FILE: docs/build/quick-start.md --- --- START OF FILE: docs/build/serverless-decentralized-compute.md --- # Serverless Decentralized Compute on AO AO enables powerful "serverless" computation patterns by allowing you to run code (WASM, Lua) directly within decentralized processes, triggered by messages. Furthermore, if computations are performed on nodes running in Trusted Execution Environments (TEEs), you can obtain cryptographic attestations verifying the execution integrity. ## Core Concept: Compute Inside Processes Instead of deploying code to centralized servers, you deploy code *to* the Arweave permaweb and instantiate it as an AO process. Interactions happen by sending messages to this process ID. * **Code Deployment:** Your WASM binary or Lua script is uploaded to Arweave, getting a permanent transaction ID. * **Process Spawning:** You create an AO process, associating it with your code's transaction ID and specifying the appropriate compute device ([`~wasm64@1.0`](../devices/wasm64-at-1-0.md) or [`~lua@5.3a`](../devices/lua-at-5-3a.md)). * **Execution via Messages:** Sending a message to the process ID triggers the HyperBEAM node (that picks up the message) to: 1. Load the process state. 2. Fetch the associated WASM/Lua code from Arweave. 3. Execute the code using the relevant device ([`dev_wasm`](../resources/source-code/dev_wasm.md) or [`dev_lua`](../resources/source-code/dev_lua.md)), passing the message data and current state. 4. Update the process state based on the execution results. ## TEE Attestations (via [`~snp@1.0`](../resources/source-code/dev_snp.md)) If a HyperBEAM node performing these computations runs within a supported Trusted Execution Environment (like AMD SEV-SNP), it can provide cryptographic proof of execution. * **How it works:** The [`~snp@1.0`](../resources/source-code/dev_snp.md) device interacts with the TEE hardware. * **Signed Responses:** When a TEE-enabled node processes your message (e.g., executes your WASM function), the HTTP response containing the result can be cryptographically signed by a key that *provably* only exists inside the TEE. * **Verification:** Clients receiving this response can verify the signature against the TEE platform's attestation mechanism (e.g., AMD's KDS) to gain high confidence that the computation was performed correctly and confidentially within the secure environment, untampered by the node operator. **Obtaining Attested Responses:** This usually involves interacting with nodes specifically advertised as TEE-enabled. 
The exact mechanism for requesting and verifying attestations depends on the specific TEE technology and node configuration.

* The HTTP response headers might contain specific signature or attestation data (e.g., using HTTP Message Signatures RFC-9421 via [`dev_codec_httpsig`](../resources/source-code/dev_codec_httpsig.md)).
* You might query the [`~snp@1.0`](../resources/source-code/dev_snp.md) device directly on the node to get its attestation report.

Refer to documentation on [TEE Nodes](../run/tee-nodes.md) and the [`~snp@1.0`](../resources/source-code/dev_snp.md) device for details.

By leveraging WASM, Lua, and optional TEE attestations, AO provides a powerful platform for building complex, verifiable, and truly decentralized serverless applications.

--- END OF FILE: docs/build/serverless-decentralized-compute.md ---

--- START OF FILE: docs/devices/json-at-1-0.md ---

# Device: ~json@1.0

## Overview

The [`~json@1.0`](../resources/source-code/dev_json_iface.md) device provides a mechanism to interact with JSON (JavaScript Object Notation) data structures using HyperPATHs. It allows treating a JSON document or string as a stateful entity against which HyperPATH queries can be executed.

This device is useful for:

* Serializing and deserializing JSON data.
* Querying and modifying JSON objects.
* Integrating with other devices and operations via HyperPATH chaining.

## Core Functions (Keys)

### Serialization

* **`GET /~json@1.0/serialize` (Direct Serialize Action)**
    * **Action:** Serializes the input message or data into a JSON string.
    * **Example:** `GET /~json@1.0/serialize` - serializes the current message as JSON.
    * **HyperPATH:** The path segment `/serialize` directly follows the device identifier.
* **`GET /<path>/~json@1.0/serialize` (Chained Serialize Action)**
    * **Action:** Takes arbitrary data output from `<path>` (another device or operation) and returns its serialized JSON string representation.
    * **Example:** `GET /~meta@1.0/info/~json@1.0/serialize` - fetches node info from the meta device and then pipes it to the JSON device to serialize the result as JSON.
    * **HyperPATH:** This segment (`/~json@1.0/serialize`) is appended to a previous HyperPATH segment.

## HyperPATH Chaining Example

The JSON device is particularly useful in HyperPATH chains to convert output from other devices into JSON format:

```
GET /~meta@1.0/info/~json@1.0/serialize
```

This retrieves the node configuration from the meta device and serializes it to JSON.

## See Also

- [Message Device](../resources/source-code/dev_message.md) - Works well with JSON serialization
- [Meta Device](../resources/source-code/dev_meta.md) - Can provide configuration data to serialize

[json module](../resources/source-code/dev_codec_json.md)

--- END OF FILE: docs/devices/json-at-1-0.md ---

--- START OF FILE: docs/devices/lua-at-5-3a.md ---

# Device: ~lua@5.3a

## Overview

The [`~lua@5.3a`](../resources/source-code/dev_lua.md) device enables the execution of Lua scripts within the HyperBEAM environment. It provides an isolated sandbox where Lua code can process incoming messages, interact with other devices, and manage state.

## Core Concept: Lua Script Execution

This device allows processes to perform computations defined in Lua scripts. Similar to the [`~wasm64@1.0`](../resources/source-code/dev_wasm.md) device, it manages the lifecycle of a Lua execution state associated with the process.
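Because the Lua execution state lives inside an ordinary AO process, you read its results over HTTP using the same process HyperPATHs described elsewhere in these docs. A minimal sketch, assuming a local node on port 8734 and a hypothetical `<process-id>` whose execution stack includes `~lua@5.3a` (see the definition snippet later on this page):

```bash
# Compute and return the latest results of the Lua-backed process
curl "http://localhost:8734/<process-id>~process@1.0/now"

# The same request, with the result serialized as JSON via device chaining
curl "http://localhost:8734/<process-id>~process@1.0/now/serialize~json@1.0"
```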
## Key Functions (Keys) These keys are typically used within an execution stack (managed by [`dev_stack`](../resources/source-code/dev_stack.md)) for an AO process. * **`init`** * **Action:** Initializes the Lua environment for the process. It finds and loads the Lua script(s) associated with the process, creates a `luerl` state, applies sandboxing rules if specified, installs the [`dev_lua_lib`](../resources/source-code/dev_lua_lib.md) (providing AO-specific functions like `ao.send`), and stores the initialized state in the process's private area (`priv/state`). * **Inputs (Expected in Process Definition or `init` Message):** * `script`: Can be: * An Arweave Transaction ID of the Lua script file. * A list of script IDs or script message maps. * A message map containing the Lua script in its `body` tag (Content-Type `application/lua` or `text/x-lua`). * A map where keys are module names and values are script IDs/messages. * `sandbox`: (Optional) Controls Lua sandboxing. Can be `true` (uses default sandbox list), `false` (no sandbox), or a map/list specifying functions to disable and their return values. * **Outputs (Stored in `priv/`):** * `state`: The initialized `luerl` state handle. * **`` (Default Handler - `compute`)** * **Action:** Executes a specific function within the loaded Lua script(s). This is the default handler; if a key matching a Lua function name is called on the device, this logic runs. * **Inputs (Expected in Process State or Incoming Message):** * `priv/state`: The Lua state obtained during `init`. * The **key** being accessed (used as the default function name). * `function` or `body/function`: (Optional) Overrides the function name derived from the key. * `parameters` or `body/parameters`: (Optional) Arguments to pass to the Lua function. Defaults to a list containing the process message, the request message, and an empty options map. * **Response:** The results returned by the Lua function call, typically encoded. The device also updates the `priv/state` with the Lua state after execution. * **`snapshot`** * **Action:** Captures the current state of the running Lua environment. `luerl` state is serializable. * **Inputs:** `priv/state`. * **Outputs:** A message containing the serialized Lua state, typically tagged with `[Prefix]/State`. * **`normalize` (Internal Helper)** * **Action:** Ensures a consistent state representation by loading a Lua state from a snapshot (`[Prefix]/State`) if a live state (`priv/state`) isn't already present. * **`functions`** * **Action:** Returns a list of all globally defined functions within the current Lua state. * **Inputs:** `priv/state`. * **Response:** A list of function names. ## Sandboxing The `sandbox` option in the process definition restricts potentially harmful Lua functions (like file I/O, OS commands, loading arbitrary code). By default (`sandbox = true`), common dangerous functions are disabled. You can customize the sandbox rules. ## AO Library (`dev_lua_lib`) The `init` function automatically installs a helper library ([`dev_lua_lib`](../resources/source-code/dev_lua_lib.md)) into the Lua state. This library typically provides functions for interacting with the AO environment from within the Lua script, such as: * `ao.send({ Target = ..., ... })`: To send messages from the process. * Access to message tags and data. ## Usage within `dev_stack` Like [`~wasm64@1.0`](../resources/source-code/dev_wasm.md), the `~lua@5.3a` device is typically used within an execution stack. 
```text # Example Process Definition Snippet Execution-Device: stack@1.0 Execution-Stack: scheduler@1.0, lua@5.3a Script: Sandbox: true ``` This device offers a lightweight, integrated scripting capability for AO processes, suitable for a wide range of tasks from simple logic to more complex state management and interactions. [lua module](../resources/source-code/dev_lua.md) --- END OF FILE: docs/devices/lua-at-5-3a.md --- --- START OF FILE: docs/devices/message-at-1-0.md --- # Device: ~message@1.0 ## Overview The [`~message@1.0`](../resources/source-code/dev_message.md) device is a fundamental built-in device in HyperBEAM. It serves as the identity device for standard AO-Core messages, which are represented as Erlang maps internally. Its primary function is to allow manipulation and inspection of these message maps directly via HyperPATH requests, without needing a persistent process state. This device is particularly useful for: * Creating and modifying transient messages on the fly using query parameters. * Retrieving specific values from a message map. * Inspecting the keys of a message. * Handling message commitments and verification (though often delegated to specialized commitment devices like [`httpsig@1.0`](../resources/source-code/dev_codec_httpsig.md)). ## Core Functionality The `message@1.0` device treats the message itself as the state it operates on. Key operations are accessed via path segments in the HyperPATH. ### Key Access (`/key`) To retrieve the value associated with a specific key in the message map, simply append the key name to the path. Key lookup is case-insensitive. **Example:** ``` GET /~message@1.0&hello=world&Key=Value/key ``` **Response:** ``` "Value" ``` ### Reserved Keys The `message@1.0` device reserves several keys for specific operations: * **`get`**: (Default operation if path segment matches a key in the map) Retrieves the value of a specified key. Behaves identically to accessing `/key` directly. * **`set`**: Modifies the message by adding or updating key-value pairs. Requires additional parameters (usually in the request body or subsequent path segments/query params, depending on implementation specifics). * Supports deep merging of maps. * Setting a key to `unset` removes it. * Overwriting keys that are part of existing commitments will typically remove those commitments unless the new value matches the old one. * **`set_path`**: A special case for setting the `path` key itself, which cannot be done via the standard `set` operation. * **`remove`**: Removes one or more specified keys from the message. Requires an `item` or `items` parameter. * **`keys`**: Returns a list of all public (non-private) keys present in the message map. * **`id`**: Calculates and returns the ID (hash) of the message. Considers active commitments based on specified `committers`. May delegate ID calculation to a device specified by the message's `id-device` key * **`commit`**: Creates a commitment (e.g., a signature) for the message. Requires parameters like `commitment-device` and potentially committer information. Delegates the actual commitment generation to the specified device (default [`httpsig@1.0`](../resources/source-code/dev_codec_httpsig.md)). * **`committers`**: Returns a list of committers associated with the commitments in the message. Can be filtered by request parameters. * **`commitments`**: Used internally and in requests to filter or specify which commitments to operate on (e.g., for `id` or `verify`). 
* **`verify`**: Verifies the commitments attached to the message. Can be filtered by `committers` or specific `commitment` IDs in the request. Delegates verification to the device specified in each commitment (`commitment-device`).

### Private Keys

Keys prefixed with `priv` (e.g., `priv_key`, `private.data`) are considered private and cannot be accessed or listed via standard `get` or `keys` operations.

## HyperPATH Example

This example demonstrates creating a transient message and retrieving a value:

```
GET /~message@1.0&hello=world&k=v/k
```

**Breakdown:**

1. `~message@1.0`: Sets the root device.
2. `&hello=world&k=v`: Query parameters create the initial message: `#{ <<"hello">> => <<"world">>, <<"k">> => <<"v">> }`.
3. `/k`: The path segment requests the value for the key `k`.

**Response:**

```
"v"
```

--- END OF FILE: docs/devices/message-at-1-0.md ---

--- START OF FILE: docs/devices/meta-at-1-0.md ---

# Device: ~meta@1.0

## Overview

The [`~meta@1.0`](../resources/source-code/dev_meta.md) device provides access to metadata and configuration information about the local HyperBEAM node and the broader AO network. This device is essential for inspecting a node's identity and settings, and for initializing or updating its configuration.

## Core Functions (Keys)

### `info`

Retrieves or modifies the node's configuration message (often referred to as `NodeMsg` internally).

* **`GET /~meta@1.0/info`**
    * **Action:** Returns the current node configuration message.
    * **Response:** A message map containing the node's settings. Sensitive keys (like private wallets) are filtered out. Dynamically generated keys like the node's public `address` are added if a wallet is configured.
* **`POST /~meta@1.0/info`**
    * **Action:** Updates the node's configuration message. Requires the request to be signed by the node's configured `operator` key/address.
    * **Request Body:** A message map containing the configuration keys and values to update.
    * **Response:** Confirmation message indicating success or failure.
    * **Note:** Once a node's configuration is marked as `initialized = permanent`, it cannot be changed via this method.

## Key Configuration Parameters Managed by `~meta`

While the `info` key is the primary interaction point, the `NodeMsg` managed by `~meta` holds crucial configuration parameters affecting the entire node's behavior, including (but not limited to):

* `port`: HTTP server port.
* `priv_wallet` / `key_location`: Path to the node's Arweave key file.
* `operator`: The address designated as the node operator (defaults to the address derived from `priv_wallet`).
* `initialized`: Status indicating if the node setup is temporary or permanent.
* `preprocessor` / `postprocessor`: Optional messages defining pre/post-processing logic for requests.
* `routes`: Routing table used by [`dev_router`](../resources/source-code/dev_router.md).
* `store`: Configuration for data storage.
* `trace`: Debug tracing options.
* `p4_*`: Payment configuration.
* `faff_*`: Access control lists.

*(Refer to `hb_opts.erl` for a comprehensive list of options.)*

## Utility Functions (Internal/Module Level)

The [`dev_meta.erl`](../resources/source-code/dev_meta.md) module also contains helper functions used internally or callable from other Erlang modules:

* `is_operator(RequestMsg, NodeMsg) -> boolean()`: Checks if the signer of `RequestMsg` matches the configured `operator` in `NodeMsg`.
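To take a quick look at what the `info` key described above returns, you can query it directly from the command line. A minimal sketch, assuming a local HyperBEAM node listening on port 8734 (updating the configuration via `POST /~meta@1.0/info` additionally requires a request signed by the node's configured `operator`, so it is not shown here):

```bash
# Fetch the node's public configuration message (sensitive keys are filtered out)
curl "http://localhost:8734/~meta@1.0/info"
```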
## Pre/Post-Processing Hooks

The `~meta` device applies the node's configured `preprocessor` message before resolving the main request and the `postprocessor` message after obtaining the result, allowing for global interception and modification of requests/responses.

## Initialization

Before a node can process general requests, it usually needs to be initialized. Attempts to access devices other than `~meta@1.0/info` before initialization typically result in an error. Initialization often involves setting essential parameters like the operator key via a `POST` to `info`.

[meta module](../resources/source-code/dev_meta.md)

--- END OF FILE: docs/devices/meta-at-1-0.md ---

--- START OF FILE: docs/devices/process-at-1-0.md ---

# Device: ~process@1.0

## Overview

The [`~process@1.0`](../resources/source-code/dev_process.md) device represents a persistent, shared execution environment within HyperBEAM, analogous to a process or actor in other systems. It allows for stateful computation and interaction over time.

## Core Concept: Orchestration

A message tagged with `Device: process@1.0` (the "Process Definition Message") doesn't typically perform computation itself. Instead, it defines *which other devices* should be used for key aspects of its lifecycle:

* **Scheduler Device:** Determines the order of incoming messages (assignments) to be processed. (Defaults to [`~scheduler@1.0`](../resources/source-code/dev_scheduler.md)).
* **Execution Device:** Executes the actual computation based on the current state and the scheduled message. Often configured as [`dev_stack`](../resources/source-code/dev_stack.md) to allow multiple computational steps (e.g., running WASM, applying cron jobs, handling proofs).
* **Push Device:** Handles the injection of new messages into the process's schedule. (Defaults to [`~push@1.0`](../resources/source-code/dev_push.md)).

The `~process@1.0` device acts as a router, intercepting requests and delegating them to the appropriate configured device (scheduler, executor, etc.) by temporarily swapping the device tag on the message before resolving.

## Key Functions (Keys)

These keys are accessed via HyperPATHs relative to the Process Definition Message ID (`<process-id>`).

* **`GET /~process@1.0/schedule`**
    * **Action:** Delegates to the configured Scheduler Device (via the process's `schedule/3` function) to retrieve the current schedule or state.
    * **Response:** Depends on the Scheduler Device implementation (e.g., list of message IDs).
* **`POST /~process@1.0/schedule`**
    * **Action:** Delegates to the configured Push Device (via the process's `push/3` function) to add a new message to the process's schedule.
    * **Request Body:** The message to be added.
    * **Response:** Confirmation or result from the Push Device.
* **`GET /~process@1.0/compute/<slot-or-message-id>`**
    * **Action:** Computes the process state up to a specific point identified by `<slot-or-message-id>` (either a slot number or a message ID within the schedule). It retrieves assignments from the Scheduler Device and applies them sequentially using the configured Execution Device.
    * **Response:** The process state message after executing up to the target slot/message.
    * **Caching:** Results are cached aggressively (see [`dev_process_cache`](../resources/source-code/dev_process_cache.md)) to avoid recomputation.
* **`GET /~process@1.0/now`**
    * **Action:** Computes and returns the `Results` key from the *latest* known state of the process. This typically involves computing all pending assignments.
    * **Response:** The value of the `Results` key from the final state.
* **`GET /~process@1.0/slot`**
    * **Action:** Delegates to the configured Scheduler Device to query information about a specific slot or the current slot number.
    * **Response:** Depends on the Scheduler Device implementation.
* **`GET /~process@1.0/snapshot`**
    * **Action:** Delegates to the configured Execution Device to generate a snapshot of the current process state. This often involves running the execution stack in a specific "map" mode to gather state from different components.
    * **Response:** A message representing the process snapshot, often marked for caching.

## Process Definition Example

A typical process definition message might look like this (represented conceptually):

```text
Device: process@1.0
Scheduler-Device: scheduler@1.0
Execution-Device: stack@1.0
Execution-Stack: "scheduler@1.0", "cron@1.0", "wasm64@1.0", "PoDA@1.0"
Cron-Frequency: 10-Minutes
WASM-Image: <wasm-image-id>
PoDA:
    Device: PoDA/1.0
    Authority: <authority-address>
    Authority: <authority-address>
    Quorum: 2
```

This defines a process that uses:

* The standard scheduler ([`scheduler@1.0`](../resources/source-code/dev_scheduler.md)).
* A stack executor ([`stack@1.0`](../resources/source-code/dev_stack.md)) that runs scheduling logic, cron jobs ([`cron@1.0`](../resources/source-code/dev_cron.md)), a WASM module ([`wasm64@1.0`](../resources/source-code/dev_wasm.md)), and a Proof-of-Data-Availability check ([`PoDA@1.0`](../resources/source-code/dev_poda.md)).

## State Management & Caching

`~process@1.0` relies heavily on caching ([`dev_process_cache`](../resources/source-code/dev_process_cache.md)) to optimize performance. Full state snapshots and intermediate results are cached periodically (configurable via `Cache-Frequency` and `Cache-Keys` options) to avoid recomputing the entire history for every request.

## Initialization (`init`)

Processes often require an initialization step before they can process messages. This is typically triggered by calling the `init` key on the configured Execution Device via the process path (`/~process@1.0/init`). This allows components within the execution stack (like WASM modules) to set up their initial state.

[process module](../resources/source-code/dev_process.md)

--- END OF FILE: docs/devices/process-at-1-0.md ---

--- START OF FILE: docs/devices/relay-at-1-0.md ---

# Device: ~relay@1.0

## Overview

The [`~relay@1.0`](../resources/source-code/dev_relay.md) device enables HyperBEAM nodes to send messages to external HTTP endpoints or other AO nodes.

## Core Concept: Message Forwarding

This device acts as an HTTP client within the AO ecosystem. It allows a node or process to make outbound HTTP requests.

## Key Functions (Keys)

* **`call`**
    * **Action:** Sends an HTTP request to a specified target and waits synchronously for the response.
    * **Inputs (from Request Message or Base Message M1):**
        * `target`: (Optional) A message map defining the request to be sent. Defaults to the original incoming request (`Msg2` or `M1`).
        * `relay-path` or `path`: The URL/path to send the request to.
        * `relay-method` or `method`: The HTTP method (GET, POST, etc.).
        * `relay-body` or `body`: The request body.
        * `requires-sign`: (Optional, boolean) If true, the request message (`target`) will be signed using the node's key before sending. Defaults to `false`.
        * `http-client`: (Optional) Specify a custom HTTP client module to use (defaults to node's configured `relay_http_client`).
    * **Response:** `{ok, Response}` where `Response` is the full message received from the remote peer, or `{error, Reason}`.
* **Example HyperPATH:** ``` GET /~relay@1.0/call?method=GET&path=https://example.com ``` * **`cast`** * **Action:** Sends an HTTP request asynchronously. The device returns immediately after spawning a process to send the request; it does not wait for or return the response from the remote peer. * **Inputs:** Same as `call`. * **Response:** `{ok, <<"OK">>}`. * **`preprocess`** * **Action:** This function is designed to be used as a node's global `preprocessor` (configured via [`~meta@1.0`](../resources/source-code/dev_meta.md)). When configured, it intercepts *all* incoming requests to the node and automatically rewrites them to be relayed via the `call` key. This effectively turns the node into a pure forwarding proxy, using its routing table ([`dev_router`](../resources/source-code/dev_router.md)) to determine the destination. * **Response:** A message structure that invokes `/~relay@1.0/call` with the original request as the target body. ## Use Cases * **Inter-Node Communication:** Sending messages between HyperBEAM nodes. * **External API Calls:** Allowing AO processes to interact with traditional web APIs. * **Routing Nodes:** Nodes configured with the `preprocess` key act as dedicated routers/proxies. * **Client-Side Relaying:** A local HyperBEAM instance can use `~relay@1.0` to forward requests to public compute nodes. ## Interaction with Routing When `call` or `cast` is invoked, the actual HTTP request dispatch is handled by `hb_http:request/2`. This function often utilizes the node's routing configuration ([`dev_router`](../resources/source-code/dev_router.md)) to determine the specific peer/URL to send the request to, especially if the target path is an AO process ID or another internal identifier rather than a full external URL. [relay module](../resources/source-code/dev_relay.md) --- END OF FILE: docs/devices/relay-at-1-0.md --- --- START OF FILE: docs/devices/scheduler-at-1-0.md --- # Device: ~scheduler@1.0 ## Overview The [`~scheduler@1.0`](../resources/source-code/dev_scheduler.md) device manages the queueing and ordering of messages targeted at a specific process ([`~process@1.0`](../resources/source-code/dev_process.md)). It ensures that messages are processed according to defined scheduling rules. ## Core Concept: Message Ordering When messages are sent to an AO process (typically via the [`~push@1.0`](../resources/source-code/dev_push.md) device or a `POST` to the process's `/schedule` endpoint), they are added to a queue managed by the Scheduler Device associated with that process. The scheduler ensures that messages are processed one after another in a deterministic order, typically based on arrival time and potentially other factors like message nonces or timestamps (depending on the specific scheduler implementation details). The [`~process@1.0`](../resources/source-code/dev_process.md) device interacts with its configured Scheduler Device (which defaults to `~scheduler@1.0`) primarily through the `next` key to retrieve the next message to be executed. ## Slot System Slots are a fundamental concept in the `~scheduler@1.0` device, providing a structured mechanism for organizing and sequencing computation. * **Sequential Ordering:** Slots act as numbered containers (starting at 0) that hold specific messages or tasks to be processed in a deterministic order. 
* **State Tracking:** The `at-slot` key in a process's state (or a similar internal field like `current-slot` within the scheduler itself) tracks execution progress, indicating which messages have been processed and which are pending. The `slot` function can be used to query this.
* **Assignment Storage:** Each slot contains an "assignment" - the cryptographically verified message waiting to be executed. These assignments are retrieved using the `schedule` function or internally via `next`.
* **Schedule Organization:** The collection of all slots for a process forms its "schedule".
* **Application Scenarios:**
    * **Scheduling Messages:** When a message is posted to a process (e.g., via `register`), it's assigned to the next available slot.
    * **Status Monitoring:** Clients can query a process's current slot (via the `slot` function) to check progress.
    * **Task Retrieval:** Processes find their next task by requesting the next assignment via the `next` function, which implicitly uses the next slot number based on the current state.
* **Distributed Consistency:** Slots ensure deterministic execution order across nodes, crucial for maintaining consistency in AO.

This slotting mechanism is central to AO processes built on HyperBEAM, allowing for deterministic, verifiable computation.

## Key Functions (Keys)

These keys are typically accessed via the [`~process@1.0`](../resources/source-code/dev_process.md) device, which delegates the calls to its configured scheduler.

* **`schedule` (Handler for `GET /~process@1.0/schedule`)**
    * **Action:** Retrieves the list of pending assignments (messages) for the process. May support cursor-based traversal for long schedules.
    * **Response:** A message map containing the assignments, often keyed by slot number or message ID.
* **`register` (Handler for `POST /~process@1.0/schedule`)**
    * **Action:** Adds/registers a new message to the process's schedule. If this is the first message for a process, it might initialize the scheduler state.
    * **Request Body:** The message to schedule.
    * **Response:** Confirmation, potentially including the assigned slot or message ID.
* **`slot` (Handler for `GET /~process@1.0/slot`)**
    * **Action:** Queries the current or a specific slot number within the process's schedule.
    * **Response:** Information about the requested slot, such as the current highest slot number.
* **`status` (Handler for `GET /~process@1.0/status`)**
    * **Action:** Retrieves status information about the scheduler for the process.
    * **Response:** A status message.
* **`next` (Internal Key used by [`~process@1.0`](../resources/source-code/dev_process.md))**
    * **Action:** Retrieves the next assignment message from the schedule based on the process's current `at-slot` state.
    * **State Management:** Requires the current process state (`Msg1`) containing the `at-slot` key.
    * **Response:** `{ok, #{ "body" => NextAssignment, "state" => UpdatedState }}` or `{error, Reason}` if no next assignment is found.
    * **Caching & Lookahead:** The implementation uses internal caching (`dev_scheduler_cache`, `priv/assignments`) and potentially background lookahead workers to optimize fetching subsequent assignments.
* **`init` (Internal Key)**
    * **Action:** Initializes the scheduler state for a process, often called when the process itself is initialized.
* **`checkpoint` (Internal Key)**
    * **Action:** Triggers the scheduler to potentially persist its current state or perform other checkpointing operations.
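The HTTP-facing handlers above are reached through the owning process's HyperPATH. A minimal sketch, assuming a local node on port 8734 and a hypothetical `<process-id>` placeholder for a real process ID:

```bash
# Query the process's current slot information (handled by the scheduler's `slot` key)
curl "http://localhost:8734/<process-id>~process@1.0/slot"

# Retrieve the process's schedule of pending assignments
curl "http://localhost:8734/<process-id>~process@1.0/schedule"
```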
## Interaction with Other Components * [`~process@1.0`](../resources/source-code/dev_process.md): The primary user of the scheduler, calling `next` to drive process execution. * [`~push@1.0`](../resources/source-code/dev_push.md): Often used to add messages to the schedule via `POST /schedule`. * `dev_scheduler_cache`: Internal module used for caching assignments locally on the node to reduce latency. * Scheduling Unit (SU): Schedulers may interact with external entities (like Arweave gateways or dedicated SU nodes) to fetch or commit schedules, although `~scheduler@1.0` aims for a simpler, often node-local or SU-client model. `~scheduler@1.0` provides the fundamental mechanism for ordered, sequential execution within the potentially asynchronous and parallel environment of AO. [scheduler module](../resources/source-code/dev_scheduler.md) --- END OF FILE: docs/devices/scheduler-at-1-0.md --- --- START OF FILE: docs/devices/wasm64-at-1-0.md --- # Device: ~wasm64@1.0 ## Overview The [`~wasm64@1.0`](../resources/source-code/dev_wasm.md) device enables the execution of 64-bit WebAssembly (WASM) code within the HyperBEAM environment. It provides a sandboxed environment for running compiled code from various languages (like Rust, C++, Go) that target WASM. ## Core Concept: WASM Execution This device allows AO processes to perform complex computations defined in WASM modules, which can be written in languages like Rust, C++, C, Go, etc., and compiled to WASM. The device manages the lifecycle of a WASM instance associated with the process state. ## Key Functions (Keys) These keys are typically used within an execution stack (managed by [`dev_stack`](../resources/source-code/dev_stack.md)) for an AO process. * **`init`** * **Action:** Initializes the WASM environment for the process. It locates the WASM image (binary), starts a WAMR instance, and stores the instance handle and helper functions (for reading/writing WASM memory) in the process's private state (`priv/...`). * **Inputs (Expected in Process Definition or `init` Message):** * `[Prefix]/image`: The Arweave Transaction ID of the WASM binary, or the WASM binary itself, or a message containing the WASM binary in its body. * `[Prefix]/Mode`: (Optional) Specifies execution mode (`WASM` (default) or `AOT` if allowed by node config). * **Outputs (Stored in `priv/`):** * `[Prefix]/instance`: The handle to the running WAMR instance. * `[Prefix]/write`: A function to write data into the WASM instance's memory. * `[Prefix]/read`: A function to read data from the WASM instance's memory. * `[Prefix]/import-resolver`: A function used to handle calls *from* the WASM module back *to* the AO environment (imports). * **`compute`** * **Action:** Executes a function within the initialized WASM instance. It retrieves the target function name and parameters from the incoming message or process definition and calls the WASM instance via `hb_beamr`. * **Inputs (Expected in Process State or Incoming Message):** * `priv/[Prefix]/instance`: The handle obtained during `init`. * `function` or `body/function`: The name of the WASM function to call. * `parameters` or `body/parameters`: A list of parameters to pass to the WASM function. * **Outputs (Stored in `results/`):** * `results/[Prefix]/type`: The result type returned by the WASM function. * `results/[Prefix]/output`: The actual result value returned by the WASM function. * **`import`** * **Action:** Handles calls originating *from* the WASM module (imports). 
The default implementation (`default_import_resolver`) resolves these calls by treating them as sub-calls within the AO environment, allowing WASM code to invoke other AO device functions or access process state via the `hb_ao:resolve` mechanism. * **Inputs (Provided by `hb_beamr`):** Module name, function name, arguments, signature. * **Response:** Returns the result of the resolved AO call back to the WASM instance. * **`snapshot`** * **Action:** Captures the current memory state of the running WASM instance. This is used for checkpointing and restoring process state. * **Inputs:** `priv/[Prefix]/instance`. * **Outputs:** A message containing the raw binary snapshot of the WASM memory state, typically tagged with `[Prefix]/State`. * **`normalize` (Internal Helper)** * **Action:** Ensures a consistent state representation for computation, primarily by loading a WASM instance from a snapshot (`[Prefix]/State`) if a live instance (`priv/[Prefix]/instance`) isn't already present. This allows resuming execution from a cached state. * **`terminate`** * **Action:** Stops and cleans up the running WASM instance associated with the process. * **Inputs:** `priv/[Prefix]/instance`. ## Usage within `dev_stack` The `~wasm64@1.0` device is almost always used as part of an execution stack configured in the Process Definition Message and managed by [`dev_stack`](../resources/source-code/dev_stack.md). [`dev_stack`](../resources/source-code/dev_stack.md) ensures that `init` is called on the first pass, `compute` on subsequent passes, and potentially `snapshot` or `terminate` as needed. ```text # Example Process Definition Snippet Execution-Device: [`stack@1.0`](../resources/source-code/dev_stack.md) Execution-Stack: "[`scheduler@1.0`](../resources/source-code/dev_scheduler.md)", "wasm64@1.0" WASM-Image: ``` This setup allows AO processes to leverage the computational power and language flexibility offered by WebAssembly in a decentralized, verifiable manner. [wasm module](../resources/source-code/dev_wasm.md) --- END OF FILE: docs/devices/wasm64-at-1-0.md --- --- START OF FILE: docs/devices/what-are-devices.md --- # What are HyperBEAM Devices? Devices are the core functional units within HyperBEAM and AO-Core. They define how messages are processed and what actions can be performed. Each device listed here represents a specific capability available to AO processes and nodes. Understanding these devices is key to building complex applications and configuring your HyperBEAM node effectively. ## Available Devices Below is a list of documented built-in devices. Each page details the device's purpose, available functions (keys), and usage examples where applicable. * [`~message@1.0`](./message-at-1-0.md): Base message handling and manipulation. * [`~meta@1.0`](./meta-at-1-0.md): Node configuration and metadata. * [`~process@1.0`](./process-at-1-0.md): Persistent, shared process execution environment. * [`~scheduler@1.0`](./scheduler-at-1-0.md): Message scheduling and execution ordering for processes. * [`~wasm64@1.0`](./wasm64-at-1-0.md): WebAssembly (WASM) execution engine. * [`~lua@5.3a`](./lua-at-5-3a.md): Lua script execution engine. * [`~relay@1.0`](./relay-at-1-0.md): Relaying messages to other nodes or HTTP endpoints. * [`~json@1.0`](./json-at-1-0.md): Provides access to JSON data structures using HyperPATHs. There can exist many more devices, but these are a few of the ones that are built into HyperBEAM. 
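To try a couple of these devices from the command line, you can address them directly, or compose them in a single HyperPATH. A minimal sketch, assuming a local node on port 8734; both paths appear on the device pages linked above:

```bash
# Fetch node metadata with ~meta@1.0 and serialize the result with ~json@1.0
curl "http://localhost:8734/~meta@1.0/info/~json@1.0/serialize"

# Ask ~relay@1.0 to make an outbound GET request on the node's behalf
curl "http://localhost:8734/~relay@1.0/call?method=GET&path=https://example.com"
```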
## Device Naming and Versioning Devices are typically referenced using a name and version, like `~@` (e.g., `~process@1.0`). The tilde (`~`) often indicates a primary, user-facing device, while internal or utility devices might use a `dev_` prefix in the source code (e.g., `dev_router`). Versioning indicates the specific interface and behavior of the device. Changes to a device that break backward compatibility usually result in a version increment. --- END OF FILE: docs/devices/what-are-devices.md --- --- START OF FILE: docs/introduction/hyperbeam-devices.md --- # AO Devices In AO-Core and its implementation HyperBEAM, **Devices** are modular components responsible for processing and interpreting [Messages](./what-is-ao-core.md#core-concepts). They define the specific logic for how computations are performed, data is handled, or interactions occur within the AO ecosystem. Think of Devices as specialized engines or services that can be plugged into the AO framework. This modularity is key to AO's flexibility and extensibility. ## Purpose of Devices * **Define Computation:** Devices dictate *how* a message's instructions are executed. One device might run WASM code, another might manage process state, and yet another might simply relay data. * **Enable Specialization:** Nodes running HyperBEAM can choose which Devices to support, allowing them to specialize in certain tasks (e.g., high-compute tasks, storage-focused tasks, secure TEE operations). * **Promote Modularity:** New functionalities can be added to AO by creating new Devices, without altering the core protocol. * **Distribute Workload:** Different Devices can handle different parts of a complex task, enabling parallel processing and efficient resource utilization across the network. ## Familiar Examples HyperBEAM includes many preloaded devices that provide core functionality. Some key examples include: * [`~meta@1.0`](../devices/meta-at-1-0.md): Configures the node itself (hardware specs, supported devices, payment info). * [`~process@1.0`](../devices/process-at-1-0.md): Manages persistent, shared computational states (like traditional smart contracts, but more flexible). * [`~scheduler@1.0`](../devices/scheduler-at-1-0.md): Handles the ordering and execution of messages within a process. * [`~wasm64@1.0`](../devices/wasm64-at-1-0.md): Executes WebAssembly (WASM) code, allowing for complex computations written in languages like Rust, C++, etc. * [`~lua@5.3a`](../devices/lua-at-5-3a.md): Executes Lua scripts. * [`~relay@1.0`](../devices/relay-at-1-0.md): Forwards messages between AO nodes or to external HTTP endpoints. * [`~json@1.0`](../devices/json-at-1-0.md): Provides access to JSON data structures using HyperPATHs. * [`~message@1.0`](../devices/message-at-1-0.md): Manages message state and processing. * [`~patch@1.0`](../guides/exposing-process-state.md): Applies state updates directly to a process, often used for migrating or managing process data. ## Beyond the Basics Devices aren't limited to just computation or state management. They can represent more abstract concepts: * **Security Devices** ([`~snp@1.0`](../resources/source-code/dev_snp.md), [`dev_codec_httpsig`](../resources/source-code/dev_codec_httpsig.md)): Handle tasks related to Trusted Execution Environments (TEEs) or message signing, adding layers of security and verification. 
* **Payment/Access Control Devices** ([`~p4@1.0`](../resources/source-code/dev_p4.md), [`~faff@1.0`](../resources/source-code/dev_faff.md)): Manage metering, billing, or access control for node services. * **Workflow/Utility Devices** ([`dev_cron`](../resources/source-code/dev_cron.md), [`dev_stack`](../resources/source-code/dev_stack.md), [`dev_monitor`](../resources/source-code/dev_monitor.md)): Coordinate complex execution flows, schedule tasks, or monitor process activity. ## Using Devices Devices are typically invoked via [HyperPATHs](./hyperpaths-in-hyperbeam.md). The path specifies which Device should interpret the subsequent parts of the path or the request body. ``` # Example: Execute the 'now' key on the process device for a specific process /~process@1.0/now # Example: Relay a GET request via the relay device /~relay@1.0/call?method=GET&path=https://example.com ``` The specific functions or 'keys' available for each Device are documented individually. See the [Devices section](../devices/index.md) for details on specific built-in devices. ## The Potential of Devices The modular nature of AO Devices opens up vast possibilities for future expansion and innovation. The current set of preloaded and community devices is just the beginning. As the AO ecosystem evolves, we can anticipate the development of new devices catering to increasingly specialized needs: * **Specialized Hardware Integration:** Devices could be created to interface directly with specialized hardware accelerators like GPUs (for AI/ML tasks such as running large language models), TPUs, or FPGAs, allowing AO processes to leverage high-performance computing resources securely and verifiably. * **Advanced Cryptography:** New devices could implement cutting-edge cryptographic techniques, such as zero-knowledge proofs (ZKPs) or fully homomorphic encryption (FHE), enabling enhanced privacy and complex computations on encrypted data. * **Cross-Chain & Off-Chain Bridges:** Devices could act as secure bridges to other blockchain networks or traditional Web2 APIs, facilitating seamless interoperability and data exchange between AO and the wider digital world. * **AI/ML Specific Devices:** Beyond raw GPU access, specialized devices could offer higher-level AI/ML functionalities, like optimized model inference engines or distributed training frameworks. * **Domain-Specific Logic:** Communities or organizations could develop devices tailored to specific industries or use cases, such as decentralized finance (DeFi) primitives, scientific computing libraries, or decentralized identity management systems. The Device framework ensures that AO can adapt and grow, incorporating new technologies and computational paradigms without requiring fundamental changes to the core protocol. This extensibility is key to AO's long-term vision of becoming a truly global, decentralized computer. --- END OF FILE: docs/introduction/hyperbeam-devices.md --- --- START OF FILE: docs/introduction/hyperpaths-in-hyperbeam.md --- # HyperPATHs in HyperBEAM ## Overview Understanding how to construct and interpret paths in AO-Core is fundamental to working with HyperBEAM. This guide explains the structure and components of AO-Core paths, enabling you to effectively interact with processes and access their data. 
## HyperPATH Structure Let's examine a typical HyperBEAM endpoint piece-by-piece:
```bash
https://dev-router.forward.computer/~process@1.0/now
```
### Node URL (`dev-router.forward.computer`) The HTTP response from this node includes a signature from the host's key. By accessing the [`~snp@1.0`](../resources/source-code/dev_snp.md) device, you can verify that the node is running in a genuine Trusted Execution Environment (TEE), ensuring computation integrity. You can replace `dev-router.forward.computer` with any HyperBEAM TEE node operated by any party while maintaining trustless guarantees. ### Process Path (`/~process@1.0`) Every path in AO-Core represents a program. Think of the URL bar as a Unix-style command-line interface, providing access to AO's trustless and verifiable compute. Each path component (between `/` characters) represents a step in the computation. In this example, we instruct the AO-Core node to: 1. Load a specific message from its caches (local, another node, or Arweave) 2. Interpret it with the [`~process@1.0`](../devices/process-at-1-0.md) device 3. Execute it with the process device, which implements a shared computing environment with consistent state between users ### State Access (`/now` or `/compute`) Devices in AO-Core expose keys accessible via path components. Each key executes a function on the device: - `now`: Calculates real-time process state - `compute`: Serves the latest known state (faster than checking for new messages) Under the surface, these keys represent AO-Core messages. As we progress through the path, AO-Core applies each message to the existing state. You can access the full process state by visiting:
```bash
/~process@1.0/now
```
### State Navigation You can browse through sub-messages and data fields by accessing them as keys. For example, if a process stores its interaction count in a field named `cache`, you can access it like this:
```bash
/~process@1.0/compute/cache
```
This shows the 'cache' of your process. Each response: - Is a message with a signature attesting to its correctness - Includes a hashpath describing its generation - Is transferable to other AO-Core nodes for uninterrupted execution ### Query Parameters and Type Casting Beyond path segments, HyperBEAM URLs can include query parameters that use a special type casting syntax. This allows specifying the desired data type for a parameter directly within the URL using the format `key+type=value`. - **Syntax**: A `+` symbol separates the parameter key from its intended type (e.g., `count+integer=42`, `items+list="apple",7`). - **Mechanism**: The HyperBEAM node identifies the `+type` suffix (e.g., `+integer`, `+list`, `+map`, `+float`, `+atom`, `+resolve`). It then uses internal functions ([`hb_singleton:maybe_typed`](../resources/source-code/hb_singleton.md) and [`dev_codec_structured:decode_value`](../resources/source-code/dev_codec_structured.md)) to decode and cast the provided value string into the corresponding Erlang data type before incorporating it into the message. - **Supported Types**: Common types include `integer`, `float`, `list`, `map`, `atom`, `binary` (often implicit), and `resolve` (for path resolution). List values often follow the [HTTP Structured Fields format (RFC 8941)](https://www.rfc-editor.org/rfc/rfc8941.html). This powerful feature enables the expression of complex data structures directly in URLs. ## Examples The following examples illustrate using HyperPATH with various AO-Core processes and devices.
While these cover a few specific use cases, HyperBEAM's extensible nature allows interaction with any device or process via HyperPATH. For a deeper understanding, we encourage exploring the [source code](https://github.com/permaweb/hyperbeam) and experimenting with different paths. ### Example 1: Accessing Full Process State To get the complete, real-time state of a process identified by `<process-id>`, use the `/now` path component with the [`~process@1.0`](../devices/process-at-1-0.md) device:
```bash
GET /<process-id>~process@1.0/now
```
This instructs the AO-Core node to load the process and execute the `now` function on the [`~process@1.0`](../devices/process-at-1-0.md) device. ### Example 2: Navigating to Specific Process Data If a process maintains its state in a map, you can access a specific field, like `cache`, using the faster `/compute` endpoint:
```bash
GET /<process-id>~process@1.0/compute/cache
```
This accesses the `compute` key on the [`~process@1.0`](../devices/process-at-1-0.md) device and then navigates to the `cache` key within the resulting state map. Using this path, you will see the latest 'cache' of your process (the number of interactions it has received). Every piece of relevant information about your process can be accessed similarly, effectively providing a native API. (Note: This represents direct navigation within the process state structure. For accessing data specifically published via the `~patch@1.0` device, see the documentation on [Exposing Process State](../build/migrating-from-legacynet.md#exposing-process-state-with-the-patch-device), which typically uses the `/cache/` path.) ### Example 3: Basic `~message@1.0` Usage Here's a simple example of using [`~message@1.0`](../devices/message-at-1-0.md) to create a message and retrieve a value:
```bash
GET /~message@1.0&greeting="Hello"&count+integer=42/count
```
1. **Base:** `/` - The base URL of the HyperBEAM node. 2. **Root Device:** [`~message@1.0`](../devices/message-at-1-0.md) 3. **Query Params:** `greeting="Hello"` (binary) and `count+integer=42` (integer), forming the message `#{ <<"greeting">> => <<"Hello">>, <<"count">> => 42 }`. 4. **Path:** `/count` tells `~message@1.0` to retrieve the value associated with the key `count`. **Response:** The integer `42`. ### Example 4: Using the `~message@1.0` Device with Type Casting The [`~message@1.0`](../devices/message-at-1-0.md) device can be used to construct and query transient messages, utilizing type casting in query parameters. Consider the following URL:
```bash
GET /~message@1.0&name="Alice"&age+integer=30&items+list="apple",1,"banana"&config+map=key1="val1";key2=true/[PATH]
```
HyperBEAM processes this as follows: 1. **Base:** `/` - The base URL of the HyperBEAM node. 2. **Root Device:** [`~message@1.0`](../devices/message-at-1-0.md) 3. **Query Parameters (with type casting):** * `name="Alice"` -> `#{ <<"name">> => <<"Alice">> }` (binary) * `age+integer=30` -> `#{ <<"age">> => 30 }` (integer) * `items+list="apple",1,"banana"` -> `#{ <<"items">> => [<<"apple">>, 1, <<"banana">>] }` (list) * `config+map=key1="val1";key2=true` -> `#{ <<"config">> => #{<<"key1">> => <<"val1">>, <<"key2">> => true} }` (map) 4. **Initial Message Map:** A combination of the above key-value pairs. 5. **Path Evaluation:** * If `[PATH]` is `/items/1`, the response is the integer `1`. * If `[PATH]` is `/config/key1`, the response is the binary `<<"val1">>`.
AO-Core is the foundational protocol underpinning the [AO Computer](https://ao.arweave.net). It defines a minimal, generalized model for decentralized computation built around standard web technologies like HTTP. Think of it as a way to interpret the Arweave permaweb not just as static storage, but as a dynamic, programmable, and infinitely scalable computing environment. ## Core Concepts AO-Core revolves around three fundamental components:

* **Messages:** The smallest units of data and computation. Messages can be simple data blobs or maps of named functions. They are the primary means of communication and triggering execution within the system. Messages are cryptographically linked, forming a verifiable computation graph.
* **Devices:** Modules responsible for interpreting and processing messages. Each device defines specific logic for how messages are handled (e.g., executing WASM, storing data, relaying information). This modular design allows nodes to specialize and the system to be highly extensible.
* **Paths:** Structures that link messages over time, creating a verifiable history of computations. Paths allow users to navigate the computation graph and access specific states or results. They leverage HashPaths, cryptographic fingerprints representing the sequence of operations leading to a specific message state, ensuring traceability and integrity.
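As a rough intuition for how HashPaths chain computations together (a simplified sketch, not the exact encoding used by the protocol), each new state's fingerprint commits to the previous HashPath and the identifier of the message that was applied:

```text
HashPath(State0)   = Hash(InitialMessage)
HashPath(StateN+1) = Hash(HashPath(StateN), ID(AppliedMessageN+1))
```

Because each step folds the previous fingerprint into the next, verifying or replaying any state requires only the chain of applied messages, not trust in the node that produced it.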

## Key Principles * **Minimalism:** AO-Core provides the simplest possible representation of data and computation, avoiding prescriptive consensus mechanisms or specific VM requirements. * **HTTP Native:** Designed for compatibility with HTTP protocols, making it accessible via standard web tools and infrastructure. * **Scalability:** By allowing parallel message processing and modular device execution, AO-Core enables hyper-parallel computing, overcoming the limitations of traditional sequential blockchains. * **Permissionlessness & Trustlessness:** While AO-Core itself is minimal, it provides the framework upon which higher-level protocols like AO can build systems that allow anyone to participate (`permissionlessness`) without needing to trust intermediaries (`trustlessness`). Users can choose their desired security and performance trade-offs. AO-Core transforms the permanent data storage of Arweave into a global, shared computation space, enabling the creation of complex, autonomous, and scalable decentralized applications. --- END OF FILE: docs/introduction/what-is-ao-core.md --- --- START OF FILE: docs/introduction/what-is-hyperbeam.md --- # What is HyperBEAM? hb-flag HyperBEAM is the primary, production-ready implementation of the [AO-Core protocol](./what-is-ao-core.md), built on the robust Erlang/OTP framework. It serves as a decentralized operating system, powering the AO Computer—a scalable, trust-minimized, distributed supercomputer built on permanent storage. HyperBEAM provides the runtime environment and essential services to execute AO-Core computations across a network of distributed nodes. ## Why HyperBEAM Matters HyperBEAM transforms the abstract concepts of AO-Core—such as [Messages](./what-is-ao-core.md#core-concepts), [Devices](./what-is-ao-core.md#core-concepts), and [Paths](./what-is-ao-core.md#core-concepts)—into a concrete, operational system. Here's why it's pivotal to the AO ecosystem: - **Modularity via Devices:** HyperBEAM introduces a uniquely modular architecture centered around [Devices](./hyperbeam-devices.md). These pluggable components define specific computational logic—like running WASM, managing state, or relaying data—allowing for unprecedented flexibility. Users can extend the system by creating custom Devices to fit their specific computational needs. - **Decentralized OS:** It equips nodes with the infrastructure to join the AO network, manage resources, execute computations, and communicate seamlessly. Built on the Erlang/OTP framework, HyperBEAM provides a robust and secure foundation that leverages the BEAM virtual machine for exceptional concurrency, fault tolerance, and scalability. This abstracts away underlying hardware, allowing diverse nodes to contribute resources without compatibility issues. The system governs how nodes coordinate and interact. In essence, HyperBEAM is the engine that drives the AO Computer, enabling a vision of decentralized, verifiable computing at scale. ## Core Components & Features - **Modular Devices:** The heart of HyperBEAM's extensibility. It includes essential built-in devices like [`~meta`](../devices/meta-at-1-0.md), [`~relay`](../devices/relay-at-1-0.md), [`~process`](../devices/process-at-1-0.md), [`~scheduler`](../devices/scheduler-at-1-0.md), and [`~wasm64`](../devices/wasm64-at-1-0.md) for core functionality, but the system is designed for easy addition of new custom devices. 
- **Message System:** Everything in HyperBEAM is a "Message" — a map of named functions or binary data that can be processed, transformed, and cryptographically verified. - **HTTP Interface:** Nodes expose an HTTP server for interaction via standard web requests and HyperPATHs, structured URLs that represent computation paths (effectively a sequence of state transformations for messages). ## Architecture * **Initialization Flow:** When a HyperBEAM node starts, it initializes the name service, scheduler registry, timestamp server, and HTTP server, establishing core services for process management, timing, communication, and storage. * **Compute Model:** Computation follows the pattern `Message1(Message2) => Message3`, where messages are resolved through their devices and [paths](./hyperpaths-in-hyperbeam.md). The integrity and history of these computations are ensured by **hashpaths**, which serve as a cryptographic audit trail. * **Scheduler System:** The scheduler component manages execution order using [slots](../devices/scheduler-at-1-0.md#slot-system) — sequential positions that guarantee deterministic computation. * **Process Slots:** Each process has numbered slots starting from 0 that track message execution order, ensuring consistent computation even across distributed nodes. ## HTTP API and Pathing HyperBEAM exposes a powerful HTTP API that allows for interacting with processes and accessing data through structured URL patterns. We call URLs that represent computation paths **[HyperPATHs](./hyperpaths-in-hyperbeam.md)**. The URL bar effectively functions as a command-line interface for AO's trustless and verifiable compute. For a comprehensive guide on constructing and interpreting paths in HyperBEAM, including detailed examples and best practices, see [HyperPATHs in HyperBEAM](./hyperpaths-in-hyperbeam.md). In essence, HyperBEAM is the engine that powers the AO Computer, enabling the vision of a scalable, trust-minimized, decentralized supercomputer built on permanent storage. --- END OF FILE: docs/introduction/what-is-hyperbeam.md --- --- START OF FILE: docs/resources/llms.md --- --- hide: - navigation - toc --- # LLM Context Files This section provides access to specially formatted files intended for consumption by Large Language Models (LLMs) to provide context about the HyperBEAM documentation. 1. **[LLM Summary (llms.txt)](../llms.txt)** * **Content**: Contains a brief summary of the HyperBEAM documentation structure and a list of relative file paths for all markdown documents included in the build. * **Usage**: Useful for providing an LLM with a high-level overview and the available navigation routes within the documentation. 2. **[LLM Full Content (llms-full.txt)](../llms-full.txt)** * **Content**: A single text file containing the complete, concatenated content of all markdown documents from the specified documentation directories (`begin`, `run`, `guides`, `devices`, `resources`). Each file's content is clearly demarcated. * **Usage**: Ideal for feeding the entire documentation content into an LLM for comprehensive context, analysis, or question-answering based on the full documentation set. > **Generation Process:** > These files are automatically generated by the `docs/build-all.sh` script during the documentation build process.
They consolidate information from the following directories: `docs/introduction`, `docs/run`, `docs/build`, `docs/devices`, `docs/resources` --- END OF FILE: docs/resources/llms.md --- --- START OF FILE: docs/resources/reference/faq.md --- # Frequently Asked Questions This page answers common questions about HyperBEAM, its components, and how to use them effectively. ## General Questions ### What is HyperBEAM? HyperBEAM is a client implementation of the AO-Core protocol written in Erlang. It serves as the node software for a decentralized operating system that allows operators to offer computational resources to users in the AO network. ### How does HyperBEAM differ from other distributed systems? HyperBEAM focuses on true decentralization with asynchronous message passing between isolated processes. Unlike many distributed systems that rely on central coordination, HyperBEAM nodes can operate independently while still forming a cohesive network. Additionally, its Erlang foundation provides robust fault tolerance and concurrency capabilities. ### What can I build with HyperBEAM? You can build a wide range of applications, including: - Decentralized applications (dApps) - Distributed computation systems - Peer-to-peer services - Resilient microservices - IoT device networks - Decentralized storage solutions ### Is HyperBEAM open source? Yes, HyperBEAM is source available on [GitHub](https://github.com/permaweb/HyperBEAM) and licensed under the Business Source License. ### What is the current focus or phase of HyperBEAM development? The initial development phase focuses on integrating AO processes more deeply with HyperBEAM. A key part of this is phasing out the reliance on traditional "dryrun" simulations for reading process state. Instead, processes are encouraged to use the [~patch@1.0 device](../../resources/source-code/dev_patch.md) to expose specific parts of their state directly via HyperPATH GET requests. This allows for more efficient and direct state access, particularly for web interfaces and external integrations. You can learn more about this mechanism in the [Exposing Process State with the Patch Device](../../build/migrating-from-legacynet.md#exposing-process-state-with-the-patch-device) guide. ## Installation and Setup ### What are the system requirements for running HyperBEAM? Currently, HyperBEAM is primarily tested and documented for Ubuntu 22.04 and macOS. Other platforms will be added in future updates. For detailed requirements, see the [System Requirements](../../run/configuring-your-machine.md) page. ### Can I run HyperBEAM in a container? While technically possible, running HyperBEAM in Docker containers or other containerization technologies is currently not recommended. The containerization approach may introduce additional complexity and potential performance issues. We recommend running HyperBEAM directly on the host system until container support is more thoroughly tested and optimized. ### How do I update HyperBEAM to the latest version? To update HyperBEAM: 1. Pull the latest code from the repository 2. Rebuild the application 3. Restart the HyperBEAM service Specific update instructions will vary depending on your [installation method](../../run/running-a-hyperbeam-node.md). ### Can I run multiple HyperBEAM nodes on a single machine? Yes, you can run multiple HyperBEAM nodes on a single machine, but you'll need to configure them to use different ports and data directories to avoid conflicts. 
However, this is not recommended for production environments as each node should ideally have a unique IP address to properly participate in the network. Running multiple nodes on a single machine is primarily useful for development and testing purposes. ## Architecture and Components ### What is the difference between HyperBEAM and Compute Unit? - **HyperBEAM**: The Erlang-based node software that handles message routing, process management, and device coordination. - **Compute Unit (CU)**: A NodeJS implementation that executes WebAssembly modules and handles computational tasks. Together, these components form a complete execution environment for AO processes. ## Development and Usage ### What programming languages can I use with HyperBEAM? You can use any programming language that compiles to WebAssembly (WASM) for creating modules that run on the Compute Unit. This includes languages like: - Lua - Rust - C/C++ - And many others with WebAssembly support ### How do I debug processes running in HyperBEAM? Debugging processes in HyperBEAM can be done through: 1. Logging messages to the system log (`DEBUG=HB_PRINT rebar3 shell`) 2. Monitoring process state and message flow 3. Inspecting memory usage and performance metrics ### Is there a limit to how many processes can run on a node? The practical limit depends on your hardware resources. Erlang is designed to handle millions of lightweight processes efficiently, but the actual number will be determined by: - Available memory - CPU capacity - Network bandwidth - Storage speed - The complexity of your processes ## Troubleshooting ### What should I do if a node becomes unresponsive? If a node becomes unresponsive: 1. Check the node's logs for error messages 2. Verify network connectivity 3. Ensure sufficient system resources 4. Restart the node if necessary 5. Check for configuration issues For persistent problems, consult the [Troubleshooting](troubleshooting.md) page. ### Where can I get help if I encounter issues? If you encounter issues: - Check the [Troubleshooting](troubleshooting.md) guide - Search or ask questions on [GitHub Issues](https://github.com/permaweb/HyperBEAM/issues) - Join the community on [Discord](https://discord.gg/V3yjzrBxPM) --- END OF FILE: docs/resources/reference/faq.md --- --- START OF FILE: docs/resources/reference/glossary.md --- # Glossary This glossary provides definitions for terms and concepts used throughout the HyperBEAM documentation. For a comprehensive glossary of permaweb-specific terminology, check out the [permaweb glossary](#permaweb-glossary) section below. ## AO-Core Protocol The underlying protocol that HyperBEAM implements, enabling decentralized computing and communication between nodes. AO-Core provides a framework into which any number of different computational models, encapsulated as primitive devices, can be attached. ## Asynchronous Message Passing A communication paradigm where senders don't wait for receivers to be ready, allowing for non-blocking operations and better scalability. ## Checkpoint A saved state of a process that can be used to resume execution from a known point, used for persistence and recovery. ## Compute Unit (CU) The NodeJS component of HyperBEAM that executes WebAssembly modules and handles computational tasks. ## Decentralized Execution The ability to run processes across a distributed network without centralized control or coordination. 
## Device A functional unit in HyperBEAM that provides specific capabilities to the system, such as storage, networking, or computational resources. ## Erlang The programming language used to implement the HyperBEAM core, known for its robustness and support for building distributed, fault-tolerant applications. ## ~flat@1.0 A format used for encoding settings files in HyperBEAM configuration, using HTTP header styling. ## Hashpaths A mechanism for referencing locations in a program's state-space prior to execution. These state-space links are represented as Merklized lists of programs inputs and initial states. ## HyperBEAM The Erlang-based node software that handles message routing, process management, and device coordination in the HyperBEAM ecosystem. ## Message A data structure used for communication between processes in the HyperBEAM system. Messages can be interpreted as a binary term or as a collection of named functions (a Map of functions). ## Module A unit of code that can be loaded and executed by the Compute Unit, typically in WebAssembly format. ## Node An instance of HyperBEAM running on a physical or virtual machine that participates in the distributed network. ## ~p4@1.0 A device that runs as a pre-processor and post-processor in HyperBEAM, enabling a framework for node operators to sell usage of their machine's hardware to execute AO-Core devices. ## Process An independent unit of computation in HyperBEAM with its own state and execution context. ## Process ID A unique identifier assigned to a process within the HyperBEAM system. ## ~scheduler@1.0 A device used to assign a linear hashpath to an execution, such that all users may access it with a deterministic ordering. ## ~compute-lite@1.0 A lightweight device wrapping a local WASM executor, used for executing legacynet AO processes inside HyperBEAM. ## ~json-iface@1.0 A device that offers a translation layer between the JSON-encoded message format used by legacy versions and HyperBEAM's native HTTP message format. ## ~meta@1.0 A device used to configure the node's hardware, supported devices, metering and payments information, amongst other configuration options. ## ~process@1.0 A device that enables users to create persistent, shared executions that can be accessed by any number of users, each of whom may add additional inputs to its hashpath. ## ~relay@1.0 A device used to relay messages between nodes and the wider HTTP network. It offers an interface for sending and receiving messages using a variety of execution strategies. ## ~simple-pay@1.0 A simple, flexible pricing device that can be used in conjunction with p4@1.0 to offer flat-fees for the execution of AO-Core messages. ## ~snp@1.0 A device used to generate and validate proofs that a node is executing inside a Trusted Execution Environment (TEE). ## ~wasm64@1.0 A device used to execute WebAssembly code, using the Web Assembly Micro-Runtime (WAMR) under-the-hood. ## ~stack@1.0 A device used to execute an ordered set of devices over the same inputs, allowing users to create complex combinations of other devices. ## Trusted Execution Environment (TEE) A secure area inside a processor that ensures the confidentiality and integrity of code and data loaded within it. Used in HyperBEAM for trust-minimized computation. ## WebAssembly (WASM) A binary instruction format that serves as a portable compilation target for programming languages, enabling deployment on the web and other environments. 
## Permaweb Glossary For a more comprehensive glossary of terms used in the permaweb, try the [Permaweb Glossary](https://glossary.arweave.net).
--- END OF FILE: docs/resources/reference/glossary.md --- --- START OF FILE: docs/resources/reference/troubleshooting.md --- # Troubleshooting Guide This guide addresses common issues you might encounter when working with HyperBEAM and the Compute Unit. ## Installation Issues ### Erlang Installation Fails **Symptoms**: Errors during Erlang compilation or installation **Solutions**: - Ensure all required dependencies are installed: `sudo apt-get install -y libssl-dev ncurses-dev make cmake gcc g++` - Try configuring with fewer options: `./configure --without-wx --without-debugger --without-observer --without-et` - Check disk space, as compilation requires several GB of free space ### Rebar3 Bootstrap Fails **Symptoms**: Errors when running `./bootstrap` for Rebar3 **Solutions**: - Verify Erlang is correctly installed: `erl -eval 'erlang:display(erlang:system_info(otp_release)), halt().'` - Ensure you have the latest version of the repository: `git fetch && git reset --hard origin/master` - Try manually downloading a precompiled Rebar3 binary ## HyperBEAM Issues ### HyperBEAM Won't Start **Symptoms**: Errors when running `rebar3 shell` or the HyperBEAM startup command **Solutions**: - Check for port conflicts: Another service might be using the configured port - Verify the wallet key file exists and is accessible - Examine Erlang crash dumps for detailed error information - Ensure all required dependencies are installed ### HyperBEAM Crashes During Operation **Symptoms**: Unexpected termination of the HyperBEAM process **Solutions**: - Check system resources (memory, disk space) - Examine Erlang crash dumps for details - Reduce memory limits if the system is resource-constrained - Check for network connectivity issues if connecting to external services ## Compute Unit Issues ### Compute Unit Won't Start **Symptoms**: Errors when running `npm start` in the CU directory **Solutions**: - Verify Node.js is installed correctly: `node -v` - Ensure all dependencies are installed: `npm i` - Check that the wallet file exists and is correctly formatted - Verify the `.env` file has all required settings ### Memory Errors in Compute Unit **Symptoms**: Out of memory errors or excessive memory usage **Solutions**: - Adjust the `PROCESS_WASM_MEMORY_MAX_LIMIT` environment variable - Enable garbage collection by setting an appropriate `GC_INTERVAL_MS` - Monitor memory usage and adjust limits as needed - If on a low-memory system, reduce concurrent process execution ## Integration Issues ### HyperBEAM Can't Connect to Compute Unit **Symptoms**: Connection errors in HyperBEAM logs when trying to reach the CU **Solutions**: - Verify the CU is running: `curl http://localhost:6363` - Ensure there are no firewall rules blocking the connection - Verify network configuration if components are on different machines ### Process Execution Fails **Symptoms**: Errors when deploying or executing processes **Solutions**: - Check both HyperBEAM and CU logs for specific error messages - Verify that the WASM module is correctly compiled and valid - Test with a simple example process to isolate the issue - Adjust memory limits if the process requires more resources ## Getting Help If you're still experiencing issues after trying these troubleshooting steps: 1. Check the [GitHub repository](https://github.com/permaweb/HyperBEAM) for known issues 2. Join the [Discord community](https://discord.gg/V3yjzrBxPM) for support 3. 
Open an issue on GitHub with detailed information about your problem --- END OF FILE: docs/resources/reference/troubleshooting.md --- --- START OF FILE: docs/resources/source-code/ar_bundles.md --- # [Module ar_bundles.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/ar_bundles.erl) ## Function Index ##
add_bundle_tags/1*
add_list_tags/1*
add_manifest_tags/2*
ar_bundles_test_/0*
assert_data_item/7*
check_size/2*Force that a binary is either empty or the given number of bytes.
check_type/2*Ensure that a value is of the given type.
data_item_signature_data/1Generate the data segment to be signed for a data item.
data_item_signature_data/2*
decode_avro_name/3*
decode_avro_tags/2*Decode Avro blocks (for tags) from binary.
decode_avro_value/4*
decode_bundle_header/2*
decode_bundle_header/3*
decode_bundle_items/2*
decode_optional_field/1*
decode_signature/1*Decode the signature from a binary format.
decode_tags/1Decode tags from a binary format using Apache Avro.
decode_vint/3*
decode_zigzag/1*Decode a VInt encoded ZigZag integer from binary.
deserialize/1Convert binary data back to a #tx record.
deserialize/2
encode_avro_string/1*Encode a string for Avro using ZigZag and VInt encoding.
encode_optional_field/1*Encode an optional field (target, anchor) with a presence byte.
encode_signature_type/1*Only RSA 4096 is currently supported.
encode_tags/1Encode tags into a binary format using Apache Avro.
encode_tags_size/2*
encode_vint/1*Encode a ZigZag integer to VInt binary format.
encode_vint/2*
encode_zigzag/1*Encode an integer using ZigZag encoding.
enforce_valid_tx/1*Take an item and ensure that it is of valid form.
finalize_bundle_data/1*
find/2Find an item in a bundle-map/list and return it.
find_single_layer/2*An internal helper for finding an item in a single-layer of a bundle.
format/1
format/2
format_binary/1*
format_data/2*
format_line/2*
format_line/3*
hd/1Return the first item in a bundle-map/list.
id/1Return the ID of an item -- either signed or unsigned as specified.
id/2
is_signed/1Check if an item is signed.
manifest/1
manifest_item/1Return the manifest item in a bundle-map/list.
map/1Convert an item containing a map or list into an Erlang map.
maybe_map_to_list/1*
maybe_unbundle/1*
maybe_unbundle_map/1*
member/2Check if an item exists in a bundle-map/list.
new_item/4Create a new data item.
new_manifest/1*
normalize/1
normalize_data/1*Ensure that a data item (potentially containing a map or list) has a standard, serialized form.
normalize_data_size/1*Reset the data size of a data item.
ok_or_throw/3*Throw an error if the given value is not ok.
parse_manifest/1
print/1
reset_ids/1Re-calculate both of the IDs for an item.
run_test/0*
serialize/1Convert a #tx record to its binary representation.
serialize/2
serialize_bundle_data/2*
sign_item/2Sign a data item.
signer/1Return the address of the signer of an item, if it is signed.
test_basic_member_id/0*
test_bundle_map/0*
test_bundle_with_one_item/0*
test_bundle_with_two_items/0*
test_deep_member/0*
test_empty_bundle/0*
test_extremely_large_bundle/0*
test_no_tags/0*
test_recursive_bundle/0*
test_serialize_deserialize_deep_signed_bundle/0*
test_unsigned_data_item_id/0*
test_unsigned_data_item_normalization/0*
test_with_tags/0*
test_with_zero_length_tag/0*
to_serialized_pair/1*
type/1
unbundle/1*
unbundle_list/1*
update_ids/1*Take an item and ensure that both the unsigned and signed IDs are appropriately set.
utf8_encoded/1*Encode a UTF-8 string to binary.
verify_data_item_id/1*Verify the data item's ID matches the signature.
verify_data_item_signature/1*Verify the data item's signature.
verify_data_item_tags/1*Verify the validity of the data item's tags.
verify_item/1Verify the validity of a data item.
## Function Details ## ### add_bundle_tags/1 * ### `add_bundle_tags(Tags) -> any()` ### add_list_tags/1 * ### `add_list_tags(Tags) -> any()` ### add_manifest_tags/2 * ### `add_manifest_tags(Tags, ManifestID) -> any()` ### ar_bundles_test_/0 * ### `ar_bundles_test_() -> any()` ### assert_data_item/7 * ### `assert_data_item(KeyType, Owner, Target, Anchor, Tags, Data, DataItem) -> any()` ### check_size/2 * ### `check_size(Bin, Sizes) -> any()` Force that a binary is either empty or the given number of bytes. ### check_type/2 * ### `check_type(Value, X2) -> any()` Ensure that a value is of the given type. ### data_item_signature_data/1 ### `data_item_signature_data(RawItem) -> any()` Generate the data segment to be signed for a data item. ### data_item_signature_data/2 * ### `data_item_signature_data(RawItem, X2) -> any()` ### decode_avro_name/3 * ### `decode_avro_name(NameSize, Rest, Count) -> any()` ### decode_avro_tags/2 * ### `decode_avro_tags(Binary, Count) -> any()` Decode Avro blocks (for tags) from binary. ### decode_avro_value/4 * ### `decode_avro_value(ValueSize, Name, Rest, Count) -> any()` ### decode_bundle_header/2 * ### `decode_bundle_header(Count, Bin) -> any()` ### decode_bundle_header/3 * ### `decode_bundle_header(Count, ItemsBin, Header) -> any()` ### decode_bundle_items/2 * ### `decode_bundle_items(RestItems, ItemsBin) -> any()` ### decode_optional_field/1 * ### `decode_optional_field(X1) -> any()` ### decode_signature/1 * ### `decode_signature(Other) -> any()` Decode the signature from a binary format. Only RSA 4096 is currently supported. Note: the signature type '1' corresponds to RSA 4096 - but it is is written in little-endian format which is why we match on `<<1, 0>>`. ### decode_tags/1 ### `decode_tags(X1) -> any()` Decode tags from a binary format using Apache Avro. ### decode_vint/3 * ### `decode_vint(X1, Result, Shift) -> any()` ### decode_zigzag/1 * ### `decode_zigzag(Binary) -> any()` Decode a VInt encoded ZigZag integer from binary. ### deserialize/1 ### `deserialize(Binary) -> any()` Convert binary data back to a #tx record. ### deserialize/2 ### `deserialize(Item, X2) -> any()` ### encode_avro_string/1 * ### `encode_avro_string(String) -> any()` Encode a string for Avro using ZigZag and VInt encoding. ### encode_optional_field/1 * ### `encode_optional_field(Field) -> any()` Encode an optional field (target, anchor) with a presence byte. ### encode_signature_type/1 * ### `encode_signature_type(X1) -> any()` Only RSA 4096 is currently supported. Note: the signature type '1' corresponds to RSA 4096 -- but it is is written in little-endian format which is why we encode to `<<1, 0>>`. ### encode_tags/1 ### `encode_tags(Tags) -> any()` Encode tags into a binary format using Apache Avro. ### encode_tags_size/2 * ### `encode_tags_size(Tags, EncodedTags) -> any()` ### encode_vint/1 * ### `encode_vint(ZigZag) -> any()` Encode a ZigZag integer to VInt binary format. ### encode_vint/2 * ### `encode_vint(ZigZag, Acc) -> any()` ### encode_zigzag/1 * ### `encode_zigzag(Int) -> any()` Encode an integer using ZigZag encoding. ### enforce_valid_tx/1 * ### `enforce_valid_tx(List) -> any()` Take an item and ensure that it is of valid form. Useful for ensuring that a message is viable for serialization/deserialization before execution. This function should throw simple, easy to follow errors to aid devs in debugging issues. 
### finalize_bundle_data/1 * ### `finalize_bundle_data(Processed) -> any()` ### find/2 ### `find(Key, Map) -> any()` Find an item in a bundle-map/list and return it. ### find_single_layer/2 * ### `find_single_layer(UnsignedID, TX) -> any()` An internal helper for finding an item in a single-layer of a bundle. Does not recurse! You probably want `find/2` in most cases. ### format/1 ### `format(Item) -> any()` ### format/2 ### `format(Item, Indent) -> any()` ### format_binary/1 * ### `format_binary(Bin) -> any()` ### format_data/2 * ### `format_data(Item, Indent) -> any()` ### format_line/2 * ### `format_line(Str, Indent) -> any()` ### format_line/3 * ### `format_line(RawStr, Fmt, Ind) -> any()` ### hd/1 ### `hd(Tx) -> any()` Return the first item in a bundle-map/list. ### id/1 ### `id(Item) -> any()` Return the ID of an item -- either signed or unsigned as specified. If the item is unsigned and the user requests the signed ID, we return the atom `not_signed`. In all other cases, we return the ID of the item. ### id/2 ### `id(Item, Type) -> any()` ### is_signed/1 ### `is_signed(Item) -> any()` Check if an item is signed. ### manifest/1 ### `manifest(Map) -> any()` ### manifest_item/1 ### `manifest_item(Tx) -> any()` Return the manifest item in a bundle-map/list. ### map/1 ### `map(Tx) -> any()` Convert an item containing a map or list into an Erlang map. ### maybe_map_to_list/1 * ### `maybe_map_to_list(Item) -> any()` ### maybe_unbundle/1 * ### `maybe_unbundle(Item) -> any()` ### maybe_unbundle_map/1 * ### `maybe_unbundle_map(Bundle) -> any()` ### member/2 ### `member(Key, Item) -> any()` Check if an item exists in a bundle-map/list. ### new_item/4 ### `new_item(Target, Anchor, Tags, Data) -> any()` Create a new data item. Should only be used for testing. ### new_manifest/1 * ### `new_manifest(Index) -> any()` ### normalize/1 ### `normalize(Item) -> any()` ### normalize_data/1 * ### `normalize_data(Bundle) -> any()` Ensure that a data item (potentially containing a map or list) has a standard, serialized form. ### normalize_data_size/1 * ### `normalize_data_size(Item) -> any()` Reset the data size of a data item. Assumes that the data is already normalized. ### ok_or_throw/3 * ### `ok_or_throw(TX, X2, Error) -> any()` Throw an error if the given value is not ok. ### parse_manifest/1 ### `parse_manifest(Item) -> any()` ### print/1 ### `print(Item) -> any()` ### reset_ids/1 ### `reset_ids(Item) -> any()` Re-calculate both of the IDs for an item. This is a wrapper function around `update_id/1` that ensures both IDs are set from scratch. ### run_test/0 * ### `run_test() -> any()` ### serialize/1 ### `serialize(TX) -> any()` Convert a #tx record to its binary representation. ### serialize/2 ### `serialize(TX, X2) -> any()` ### serialize_bundle_data/2 * ### `serialize_bundle_data(Map, Manifest) -> any()` ### sign_item/2 ### `sign_item(RawItem, X2) -> any()` Sign a data item. ### signer/1 ### `signer(Tx) -> any()` Return the address of the signer of an item, if it is signed. 
### test_basic_member_id/0 * ### `test_basic_member_id() -> any()` ### test_bundle_map/0 * ### `test_bundle_map() -> any()` ### test_bundle_with_one_item/0 * ### `test_bundle_with_one_item() -> any()` ### test_bundle_with_two_items/0 * ### `test_bundle_with_two_items() -> any()` ### test_deep_member/0 * ### `test_deep_member() -> any()` ### test_empty_bundle/0 * ### `test_empty_bundle() -> any()` ### test_extremely_large_bundle/0 * ### `test_extremely_large_bundle() -> any()` ### test_no_tags/0 * ### `test_no_tags() -> any()` ### test_recursive_bundle/0 * ### `test_recursive_bundle() -> any()` ### test_serialize_deserialize_deep_signed_bundle/0 * ### `test_serialize_deserialize_deep_signed_bundle() -> any()` ### test_unsigned_data_item_id/0 * ### `test_unsigned_data_item_id() -> any()` ### test_unsigned_data_item_normalization/0 * ### `test_unsigned_data_item_normalization() -> any()` ### test_with_tags/0 * ### `test_with_tags() -> any()` ### test_with_zero_length_tag/0 * ### `test_with_zero_length_tag() -> any()` ### to_serialized_pair/1 * ### `to_serialized_pair(Item) -> any()` ### type/1 ### `type(Item) -> any()` ### unbundle/1 * ### `unbundle(Item) -> any()` ### unbundle_list/1 * ### `unbundle_list(Item) -> any()` ### update_ids/1 * ### `update_ids(Item) -> any()` Take an item and ensure that both the unsigned and signed IDs are appropriately set. This function is structured to fall through all cases of poorly formed items, recursively ensuring its correctness for each case until the item has a coherent set of IDs. The cases in turn are: - The item has no unsigned_id. This is never valid. - The item has the default signature and ID. This is valid. - The item has the default signature but a non-default ID. Reset the ID. - The item has a signature. We calculate the ID from the signature. - Valid: The item is fully formed and has both an unsigned and signed ID. ### utf8_encoded/1 * ### `utf8_encoded(String) -> any()` Encode a UTF-8 string to binary. ### verify_data_item_id/1 * ### `verify_data_item_id(DataItem) -> any()` Verify the data item's ID matches the signature. ### verify_data_item_signature/1 * ### `verify_data_item_signature(DataItem) -> any()` Verify the data item's signature. ### verify_data_item_tags/1 * ### `verify_data_item_tags(DataItem) -> any()` Verify the validity of the data item's tags. ### verify_item/1 ### `verify_item(DataItem) -> any()` Verify the validity of a data item. --- END OF FILE: docs/resources/source-code/ar_bundles.md --- --- START OF FILE: docs/resources/source-code/ar_deep_hash.md --- # [Module ar_deep_hash.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/ar_deep_hash.erl) ## Function Index ##
hash/1
hash_bin/1*
hash_bin_or_list/1*
hash_list/2*
## Function Details ## ### hash/1 ### `hash(List) -> any()` ### hash_bin/1 * ### `hash_bin(Bin) -> any()` ### hash_bin_or_list/1 * ### `hash_bin_or_list(Bin) -> any()` ### hash_list/2 * ### `hash_list(List, Acc) -> any()` --- END OF FILE: docs/resources/source-code/ar_deep_hash.md --- --- START OF FILE: docs/resources/source-code/ar_rate_limiter.md --- # [Module ar_rate_limiter.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/ar_rate_limiter.erl) __Behaviours:__ [`gen_server`](gen_server.md). ## Function Index ##
cut_trace/4*
handle_call/3
handle_cast/2
handle_info/2
init/1
off/0Turn rate limiting off.
on/0Turn rate limiting on.
start_link/1
terminate/2
throttle/3Hang until it is safe to make another request to the given Peer with the given Path.
throttle2/3*
## Function Details ## ### cut_trace/4 * ### `cut_trace(N, Trace, Now, Opts) -> any()` ### handle_call/3 ### `handle_call(Request, From, State) -> any()` ### handle_cast/2 ### `handle_cast(Cast, State) -> any()` ### handle_info/2 ### `handle_info(Message, State) -> any()` ### init/1 ### `init(Opts) -> any()` ### off/0 ### `off() -> any()` Turn rate limiting off. ### on/0 ### `on() -> any()` Turn rate limiting on. ### start_link/1 ### `start_link(Opts) -> any()` ### terminate/2 ### `terminate(Reason, State) -> any()` ### throttle/3 ### `throttle(Peer, Path, Opts) -> any()` Hang until it is safe to make another request to the given Peer with the given Path. The limits are configured in include/ar_blacklist_middleware.hrl. ### throttle2/3 * ### `throttle2(Peer, Path, Opts) -> any()` --- END OF FILE: docs/resources/source-code/ar_rate_limiter.md --- --- START OF FILE: docs/resources/source-code/ar_timestamp.md --- # [Module ar_timestamp.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/ar_timestamp.erl) ## Function Index ##
cache/1*Cache the current timestamp from Arweave.
get/0Get the current timestamp from the server, starting the server if it isn't already running.
refresher/1*Refresh the timestamp cache periodically.
spawn_server/0*Spawn a new server and its refresher.
start/0Check if the server is already running, and if not, start it.
## Function Details ## ### cache/1 * ### `cache(Current) -> any()` Cache the current timestamp from Arweave. ### get/0 ### `get() -> any()` Get the current timestamp from the server, starting the server if it isn't already running. ### refresher/1 * ### `refresher(TSServer) -> any()` Refresh the timestamp cache periodically. ### spawn_server/0 * ### `spawn_server() -> any()` Spawn a new server and its refresher. ### start/0 ### `start() -> any()` Check if the server is already running, and if not, start it. --- END OF FILE: docs/resources/source-code/ar_timestamp.md --- --- START OF FILE: docs/resources/source-code/ar_tx.md --- # [Module ar_tx.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/ar_tx.erl) The module with utilities for transaction creation, signing, and verification. ## Function Index ##
collect_validation_results/2*
do_verify/2*Verify transaction.
json_struct_to_tx/1
new/4Create a new transaction.
new/5
sign/2Cryptographically sign (claim ownership of) a transaction.
signature_data_segment/1*Generate the data segment to be signed for a given TX.
tx_to_json_struct/1
verify/1Verify whether a transaction is valid.
verify_hash/1*Verify that the transaction's ID is a hash of its signature.
verify_signature/2*Verify the transaction's signature.
verify_tx_id/2Verify the given transaction actually has the given identifier.
## Function Details ## ### collect_validation_results/2 * ### `collect_validation_results(TXID, Checks) -> any()` ### do_verify/2 * ### `do_verify(TX, VerifySignature) -> any()` Verify transaction. ### json_struct_to_tx/1 ### `json_struct_to_tx(TXStruct) -> any()` ### new/4 ### `new(Dest, Reward, Qty, Last) -> any()` Create a new transaction. ### new/5 ### `new(Dest, Reward, Qty, Last, SigType) -> any()` ### sign/2 ### `sign(TX, X2) -> any()` Cryptographically sign (claim ownership of) a transaction. ### signature_data_segment/1 * ### `signature_data_segment(TX) -> any()` Generate the data segment to be signed for a given TX. ### tx_to_json_struct/1 ### `tx_to_json_struct(Tx) -> any()` ### verify/1 ### `verify(TX) -> any()` Verify whether a transaction is valid. ### verify_hash/1 * ### `verify_hash(Tx) -> any()` Verify that the transaction's ID is a hash of its signature. ### verify_signature/2 * ### `verify_signature(TX, X2) -> any()` Verify the transaction's signature. ### verify_tx_id/2 ### `verify_tx_id(ExpectedID, Tx) -> any()` Verify the given transaction actually has the given identifier. --- END OF FILE: docs/resources/source-code/ar_tx.md --- --- START OF FILE: docs/resources/source-code/ar_wallet.md --- # [Module ar_wallet.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/ar_wallet.erl) ## Function Index ##
compress_ecdsa_pubkey/1*
hash_address/1*
hmac/1
hmac/2
load_key/1Read the keyfile for the key with the given address from disk.
load_key/2Read the keyfile for the key with the given address from disk.
load_keyfile/1Extract the public and private key from a keyfile.
load_keyfile/2Extract the public and private key from a keyfile.
new/0
new/1
new_keyfile/2Generate a new wallet public and private key, with a corresponding keyfile.
sign/2Sign some data with a private key.
sign/3sign some data, hashed using the provided DigestType.
to_address/1Generate an address from a public key.
to_address/2
to_ecdsa_address/1*
to_pubkey/1Find a public key from a wallet.
to_pubkey/2
to_rsa_address/1*
verify/3Verify that a signature is correct.
verify/4
wallet_filepath/1*
wallet_filepath/3*
wallet_filepath2/1*
wallet_name/3*
## Function Details ## ### compress_ecdsa_pubkey/1 * ### `compress_ecdsa_pubkey(X1) -> any()` ### hash_address/1 * ### `hash_address(PubKey) -> any()` ### hmac/1 ### `hmac(Data) -> any()` ### hmac/2 ### `hmac(Data, DigestType) -> any()` ### load_key/1 ### `load_key(Addr) -> any()` Read the keyfile for the key with the given address from disk. Return not_found if arweave_keyfile_[addr].json or [addr].json is not found in [data_dir]/?WALLET_DIR. ### load_key/2 ### `load_key(Addr, Opts) -> any()` Read the keyfile for the key with the given address from disk. Return not_found if arweave_keyfile_[addr].json or [addr].json is not found in [data_dir]/?WALLET_DIR. ### load_keyfile/1 ### `load_keyfile(File) -> any()` Extract the public and private key from a keyfile. ### load_keyfile/2 ### `load_keyfile(File, Opts) -> any()` Extract the public and private key from a keyfile. ### new/0 ### `new() -> any()` ### new/1 ### `new(KeyType) -> any()` ### new_keyfile/2 ### `new_keyfile(KeyType, WalletName) -> any()` Generate a new wallet public and private key, with a corresponding keyfile. The provided key is used as part of the file name. ### sign/2 ### `sign(Key, Data) -> any()` Sign some data with a private key. ### sign/3 ### `sign(X1, Data, DigestType) -> any()` sign some data, hashed using the provided DigestType. ### to_address/1 ### `to_address(Pubkey) -> any()` Generate an address from a public key. ### to_address/2 ### `to_address(PubKey, X2) -> any()` ### to_ecdsa_address/1 * ### `to_ecdsa_address(PubKey) -> any()` ### to_pubkey/1 ### `to_pubkey(Pubkey) -> any()` Find a public key from a wallet. ### to_pubkey/2 ### `to_pubkey(PubKey, X2) -> any()` ### to_rsa_address/1 * ### `to_rsa_address(PubKey) -> any()` ### verify/3 ### `verify(Key, Data, Sig) -> any()` Verify that a signature is correct. ### verify/4 ### `verify(X1, Data, Sig, DigestType) -> any()` ### wallet_filepath/1 * ### `wallet_filepath(Wallet) -> any()` ### wallet_filepath/3 * ### `wallet_filepath(WalletName, PubKey, KeyType) -> any()` ### wallet_filepath2/1 * ### `wallet_filepath2(Wallet) -> any()` ### wallet_name/3 * ### `wallet_name(WalletName, PubKey, KeyType) -> any()` --- END OF FILE: docs/resources/source-code/ar_wallet.md --- --- START OF FILE: docs/resources/source-code/dev_apply.md --- # [Module dev_apply.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_apply.erl) A device that executes AO resolutions. ## Description ## It can be passed either a key that points to a singleton message or list of messages to resolve, or a `base` and `request` pair to execute together via invoking the `pair` key. When given a message with a `base` and `request` key, the default handler will invoke `pair` upon it, setting the `path` in the resulting request to the key that `apply` was invoked with. If no `base` or `request` key is present, the default handler will invoke `eval` upon the given message, using the given key as the `source` of the message/list of messages to resolve. Paths found in keys interpreted by this device can contain a `base:` or `request:` prefix to indicate the message from which the path should be retrieved. If no such prefix is present, the `Request` message is checked first, and the `Base` message is checked second. ## Function Index ##
- `apply_over_http_test/0`*
- `default/4`*: The default handler.
- `error_to_message/1`*: Convert an error to a message.
- `eval/3`*: Apply the request's `source` key.
- `find_message/4`*: Find the value of the source key, supporting `base:` and `request:` prefixes.
- `find_path/4`*: Resolve the given path on the message as `message@1.0`.
- `info/1`: The device info.
- `normalize_path/1`*: Normalize the path.
- `pair/3`: Apply the message found at `request` to the message found at `base`.
- `pair/4`*
- `resolve_key_test/0`*
- `resolve_pair_test/0`*
- `resolve_with_prefix_test/0`*
- `reverse_resolve_pair_test/0`*
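
As a rough illustration of the `pair` flow described in this module's overview, a request carrying `base` and `request` keys might be shaped as below. The key names come from the description above; the concrete device, values, and path are hypothetical.

```erlang
%% Hypothetical message shape for the default handler's `pair` branch:
%% the message under `request` is applied to the message under `base`.
Msg = #{
    <<"base">> => #{
        <<"device">>   => <<"message@1.0">>,  % assumed base device
        <<"greeting">> => <<"hello">>
    },
    <<"request">> => #{
        <<"path">> => <<"greeting">>          % resolved against `base`
    }
}.
```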
## Function Details ##

### apply_over_http_test/0 * ###

`apply_over_http_test() -> any()`

### default/4 * ###

`default(Key, Base, Request, Opts) -> any()`

The default handler. If the `base` and `request` keys are present in the given request, then the `pair` function is called. Otherwise, the `eval` key is used to resolve the request.

### error_to_message/1 * ###

`error_to_message(Error) -> any()`

Convert an error to a message.

### eval/3 * ###

`eval(Base, Request, Opts) -> any()`

Apply the request's `source` key. If this key is invoked as a result of the default handler, the `source` key is set to the key of the request.

### find_message/4 * ###

`find_message(Path, Base, Request, Opts) -> any()`

Find the value of the source key, supporting `base:` and `request:` prefixes.

### find_path/4 * ###

`find_path(Path, Base, Request, Opts) -> any()`

Resolve the given path on the message as `message@1.0`.

### info/1 ###

`info(X1) -> any()`

The device info. All keys aside from `pair`, `keys`, and `set` are resolved with the `apply/4` function.

### normalize_path/1 * ###

`normalize_path(Path) -> any()`

Normalize the path.

### pair/3 ###

`pair(Base, Request, Opts) -> any()`

Apply the message found at `request` to the message found at `base`.

### pair/4 * ###

`pair(PathToSet, Base, Request, Opts) -> any()`

### resolve_key_test/0 * ###

`resolve_key_test() -> any()`

### resolve_pair_test/0 * ###

`resolve_pair_test() -> any()`

### resolve_with_prefix_test/0 * ###

`resolve_with_prefix_test() -> any()`

### reverse_resolve_pair_test/0 * ###

`reverse_resolve_pair_test() -> any()`

--- END OF FILE: docs/resources/source-code/dev_apply.md ---

--- START OF FILE: docs/resources/source-code/dev_cache.md ---

# [Module dev_cache.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_cache.erl)

A device that looks up an ID from a local store and returns it, honoring the `accept` key to return the correct format.

## Description ##

The cache also supports writing messages to the store, if the node message has the writer's address in its `cache_writers` key.

## Function Index ##
- `cache_write_binary_test/0`*: Ensure that we can write direct binaries to the cache.
- `cache_write_message_test/0`*: Test that the cache can be written to and read from using the `hb_cache` API.
- `is_trusted_writer/2`*: Verify that the request originates from a trusted writer.
- `link/3`: Link a source to a destination in the cache.
- `read/3`: Read data from the cache.
- `read_from_cache/2`*: Read data from the cache via HTTP.
- `setup_test_env/0`*: Create a test environment with a local store and node.
- `write/3`: Write data to the cache.
- `write_single/2`*: Helper function to write a single data item to the cache.
- `write_to_cache/3`*: Write data to the cache via HTTP.
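
Because writes are gated on the `cache_writers` node option mentioned above, a node that should accept cache writes needs that key populated. A minimal, hypothetical options sketch follows; the key shape and the address value are assumptions.

```erlang
%% Hedged sketch: authorize a single trusted cache writer, per the
%% `cache_writers` option described above. The address is a placeholder.
Opts = #{
    cache_writers => [<<"ExampleWriterAddress0000000000000000000000000">>]
}.
```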
## Function Details ## ### cache_write_binary_test/0 * ### `cache_write_binary_test() -> any()` Ensure that we can write direct binaries to the cache. ### cache_write_message_test/0 * ### `cache_write_message_test() -> any()` Test that the cache can be written to and read from using the hb_cache API. ### is_trusted_writer/2 * ### `is_trusted_writer(Req, Opts) -> any()` Verify that the request originates from a trusted writer. Checks that the single signer of the request is present in the list of trusted cache writer addresses specified in the options. ### link/3 ### `link(Base, Req, Opts) -> any()` Link a source to a destination in the cache. ### read/3 ### `read(M1, M2, Opts) -> any()` Read data from the cache. Retrieves data corresponding to a key from a local store. The key is extracted from the incoming message under <<"target">>. The options map may include store configuration. If the "accept" header is set to <<"application/aos-2">>, the result is converted to a JSON structure and encoded. ### read_from_cache/2 * ### `read_from_cache(Node, Path) -> any()` Read data from the cache via HTTP. Constructs a GET request using the provided path, sends it to the node, and returns the response. ### setup_test_env/0 * ### `setup_test_env() -> any()` Create a test environment with a local store and node. Ensures that the required application is started, configures a local file-system store, resets the store for a clean state, creates a wallet for signing requests, and starts a node with the store and trusted cache writer configuration. ### write/3 ### `write(M1, M2, Opts) -> any()` Write data to the cache. Processes a write request by first verifying that the request comes from a trusted writer (as defined by the `cache_writers` configuration in the options). The write type is determined from the message ("single" or "batch") and the data is stored accordingly. ### write_single/2 * ### `write_single(Msg, Opts) -> any()` Helper function to write a single data item to the cache. Extracts the body, location, and operation from the message. Depending on the type of data (map or binary) or if a link operation is requested, it writes the data to the store using the appropriate function. ### write_to_cache/3 * ### `write_to_cache(Node, Data, Wallet) -> any()` Write data to the cache via HTTP. Constructs a write request message with the provided data, signs it with the given wallet, sends it to the node, and verifies that the response indicates a successful write. --- END OF FILE: docs/resources/source-code/dev_cache.md --- --- START OF FILE: docs/resources/source-code/dev_cacheviz.md --- # [Module dev_cacheviz.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_cacheviz.erl) A device that generates renders (or renderable dot output) of a node's cache. ## Function Index ##
- `dot/3`: Output the dot representation of the cache, or a specific path within the cache set by the `target` key in the request.
- `index/3`: Return a renderer in HTML form for the JSON format.
- `js/3`: Return a JS library that can be used to render the JSON format.
- `json/3`: Return a JSON representation of the cache graph, suitable for use with the `graph.js` library.
- `svg/3`: Output the SVG representation of the cache, or a specific path within the cache set by the `target` key in the request.
## Function Details ## ### dot/3 ### `dot(X1, Req, Opts) -> any()` Output the dot representation of the cache, or a specific path within the cache set by the `target` key in the request. ### index/3 ### `index(Base, X2, Opts) -> any()` Return a renderer in HTML form for the JSON format. ### js/3 ### `js(X1, X2, Opts) -> any()` Return a JS library that can be used to render the JSON format. ### json/3 ### `json(Base, Req, Opts) -> any()` Return a JSON representation of the cache graph, suitable for use with the `graph.js` library. If the request specifies a `target` key, we use that target. Otherwise, we generate a new target by writing the message to the cache and using the ID of the written message. ### svg/3 ### `svg(Base, Req, Opts) -> any()` Output the SVG representation of the cache, or a specific path within the cache set by the `target` key in the request. --- END OF FILE: docs/resources/source-code/dev_cacheviz.md --- --- START OF FILE: docs/resources/source-code/dev_codec_ans104.md --- # [Module dev_codec_ans104.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_codec_ans104.erl) Codec for managing transformations from `ar_bundles`-style Arweave TX records to and from TABMs. ## Function Index ##
- `commit/3`: Sign a message using the `priv_wallet` key in the options.
- `content_type/1`: Return the content type for the codec.
- `deduplicating_from_list/2`*: Deduplicate a list of key-value pairs by key, generating a list of values for each normalized key if there are duplicates.
- `deserialize/3`: Deserialize a binary ans104 message to a TABM.
- `do_from/3`*
- `encoded_tags_to_map/1`*: Convert an ANS-104 encoded tag list into a HyperBEAM-compatible map.
- `from/3`: Convert a `#tx` record into a message map recursively.
- `from_maintains_tag_name_case_test/0`*
- `normal_tags/1`*: Check whether a list of key-value pairs contains only normalized keys.
- `normal_tags_test/0`*
- `only_committed_maintains_target_test/0`*
- `restore_tag_name_case_from_cache_test/0`*
- `serialize/3`: Serialize a message or TX to a binary.
- `signed_duplicated_tag_name_test/0`*
- `simple_signed_to_httpsig_test_disabled/0`*
- `simple_to_conversion_test/0`*
- `tag_map_to_encoded_tags/1`*: Convert a HyperBEAM-compatible map into an ANS-104 encoded tag list, recreating the original order of the tags.
- `to/3`: Internal helper to translate a message to its `#tx` record representation, which can then be used by `ar_bundles` to serialize the message.
- `type_tag_test/0`*
- `unsigned_duplicated_tag_name_test/0`*
- `verify/3`: Verify an ANS-104 commitment.
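
For orientation, the two tag representations this codec moves between look roughly as follows. The tag names and values are illustrative only, and how key case is normalized is covered by the case-preservation tests above rather than assumed here.

```erlang
%% Illustrative shapes only: an ANS-104 encoded tag list (ordered name/value
%% pairs) and a HyperBEAM-style map of the same tags.
EncodedTags = [
    {<<"Data-Protocol">>, <<"ao">>},
    {<<"Type">>, <<"Message">>}
],
TagMap = #{
    <<"Data-Protocol">> => <<"ao">>,
    <<"Type">> => <<"Message">>
}.
```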
## Function Details ## ### commit/3 ### `commit(Msg, Req, Opts) -> any()` Sign a message using the `priv_wallet` key in the options. Supports both the `hmac-sha256` and `rsa-pss-sha256` algorithms, offering unsigned and signed commitments. ### content_type/1 ### `content_type(X1) -> any()` Return the content type for the codec. ### deduplicating_from_list/2 * ### `deduplicating_from_list(Tags, Opts) -> any()` Deduplicate a list of key-value pairs by key, generating a list of values for each normalized key if there are duplicates. ### deserialize/3 ### `deserialize(Binary, Req, Opts) -> any()` Deserialize a binary ans104 message to a TABM. ### do_from/3 * ### `do_from(RawTX, Req, Opts) -> any()` ### encoded_tags_to_map/1 * ### `encoded_tags_to_map(Tags) -> any()` Convert an ANS-104 encoded tag list into a HyperBEAM-compatible map. ### from/3 ### `from(Binary, Req, Opts) -> any()` Convert a #tx record into a message map recursively. ### from_maintains_tag_name_case_test/0 * ### `from_maintains_tag_name_case_test() -> any()` ### normal_tags/1 * ### `normal_tags(Tags) -> any()` Check whether a list of key-value pairs contains only normalized keys. ### normal_tags_test/0 * ### `normal_tags_test() -> any()` ### only_committed_maintains_target_test/0 * ### `only_committed_maintains_target_test() -> any()` ### restore_tag_name_case_from_cache_test/0 * ### `restore_tag_name_case_from_cache_test() -> any()` ### serialize/3 ### `serialize(Msg, Req, Opts) -> any()` Serialize a message or TX to a binary. ### signed_duplicated_tag_name_test/0 * ### `signed_duplicated_tag_name_test() -> any()` ### simple_signed_to_httpsig_test_disabled/0 * ### `simple_signed_to_httpsig_test_disabled() -> any()` ### simple_to_conversion_test/0 * ### `simple_to_conversion_test() -> any()` ### tag_map_to_encoded_tags/1 * ### `tag_map_to_encoded_tags(TagMap) -> any()` Convert a HyperBEAM-compatible map into an ANS-104 encoded tag list, recreating the original order of the tags. ### to/3 ### `to(Binary, Req, Opts) -> any()` Internal helper to translate a message to its #tx record representation, which can then be used by ar_bundles to serialize the message. We call the message's device in order to get the keys that we will be checkpointing. We do this recursively to handle nested messages. The base case is that we hit a binary, which we return as is. ### type_tag_test/0 * ### `type_tag_test() -> any()` ### unsigned_duplicated_tag_name_test/0 * ### `unsigned_duplicated_tag_name_test() -> any()` ### verify/3 ### `verify(Msg, Req, Opts) -> any()` Verify an ANS-104 commitment. --- END OF FILE: docs/resources/source-code/dev_codec_ans104.md --- --- START OF FILE: docs/resources/source-code/dev_codec_flat.md --- # [Module dev_codec_flat.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_codec_flat.erl) A codec for turning TABMs into/from flat Erlang maps that have (potentially multi-layer) paths as their keys, and a normal TABM binary as their value. ## Function Index ##
- `binary_passthrough_test/0`*
- `commit/3`
- `deep_nesting_test/0`*
- `deserialize/1`
- `empty_map_test/0`*
- `from/3`: Convert a flat map to a TABM.
- `inject_at_path/3`*
- `inject_at_path/4`*
- `multiple_paths_test/0`*
- `nested_conversion_test/0`*
- `path_list_test/0`*
- `serialize/1`
- `serialize/2`
- `simple_conversion_test/0`*
- `to/3`: Convert a TABM to a flat map.
- `verify/3`
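
To make the conversion concrete: a flat map uses multi-layer paths as keys, while the equivalent TABM nests those segments. The pairing below illustrates the relationship implied by the module description; it is not captured output from `from/3` or `to/3`, and the slash-delimited path syntax is an assumption.

```erlang
%% Illustration of the two shapes this codec converts between.
%% Flat form: path-style keys with binary values.
Flat = #{
    <<"config/port">> => <<"8734">>,
    <<"config/mode">> => <<"mainnet">>,
    <<"name">>        => <<"my-node">>
},
%% Equivalent nested (TABM-style) form.
Nested = #{
    <<"config">> => #{
        <<"port">> => <<"8734">>,
        <<"mode">> => <<"mainnet">>
    },
    <<"name">> => <<"my-node">>
}.
```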
## Function Details ## ### binary_passthrough_test/0 * ### `binary_passthrough_test() -> any()` ### commit/3 ### `commit(Msg, Req, Opts) -> any()` ### deep_nesting_test/0 * ### `deep_nesting_test() -> any()` ### deserialize/1 ### `deserialize(Bin) -> any()` ### empty_map_test/0 * ### `empty_map_test() -> any()` ### from/3 ### `from(Bin, Req, Opts) -> any()` Convert a flat map to a TABM. ### inject_at_path/3 * ### `inject_at_path(Rest, Value, Map) -> any()` ### inject_at_path/4 * ### `inject_at_path(Rest, Value, Map, Opts) -> any()` ### multiple_paths_test/0 * ### `multiple_paths_test() -> any()` ### nested_conversion_test/0 * ### `nested_conversion_test() -> any()` ### path_list_test/0 * ### `path_list_test() -> any()` ### serialize/1 ### `serialize(Map) -> any()` ### serialize/2 ### `serialize(Map, Opts) -> any()` ### simple_conversion_test/0 * ### `simple_conversion_test() -> any()` ### to/3 ### `to(Bin, Req, Opts) -> any()` Convert a TABM to a flat map. ### verify/3 ### `verify(Msg, Req, Opts) -> any()` --- END OF FILE: docs/resources/source-code/dev_codec_flat.md --- --- START OF FILE: docs/resources/source-code/dev_codec_httpsig_conv.md --- # [Module dev_codec_httpsig_conv.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_codec_httpsig_conv.erl) A codec that marshals TABM encoded messages to and from the "HTTP" message structure. ## Description ## Every HTTP message is an HTTP multipart message. See https://datatracker.ietf.org/doc/html/rfc7578 For each TABM Key: The Key/Value Pair will be encoded according to the following rules: "signatures" -> {SignatureInput, Signature} header Tuples, each encoded as a Structured Field Dictionary "body" -> - if a map, then recursively encode as its own HyperBEAM message - otherwise encode as a normal field _ -> encode as a normal field Each field will be mapped to the HTTP Message according to the following rules: "body" -> always encoded part of the body as with Content-Disposition type of "inline" _ -> - If the byte size of the value is less than the ?MAX_TAG_VALUE, then encode as a header, also attempting to encode as a structured field. - Otherwise encode the value as a part in the multipart response ## Function Index ##
- `body_to_parts/3`*: Split the body into parts, if it is a multipart.
- `body_to_tabm/2`*: Generate the body TABM from the `body` key of the encoded message.
- `boundary_from_parts/1`*: Generate a unique, reproducible boundary for the multipart body; we cannot use the ID of the message as the boundary, as the ID is not known until the message is encoded.
- `do_to/3`*
- `encode_body_part/4`*: Encode a multipart body part to a flat binary.
- `encode_http_msg/2`: Encode an HTTP message into a binary.
- `encode_message_with_links_test/0`*
- `field_to_http/3`*: All maps are encoded into the body of the HTTP message, to be further encoded later.
- `from/3`: Convert an HTTP message into a TABM.
- `from_body_part/3`*: Parse a single part of a multipart body into a TABM.
- `group_ids/1`*: Group all elements with ID keys and immediate values into a combined SF dict-like structure.
- `group_maps/1`*: Merge maps at the same level, if possible.
- `group_maps/4`*
- `group_maps_flat_compatible_test/0`*: The grouped-maps encoding is a subset of the flat encoding, where keys with map values are flattened.
- `group_maps_test/0`*
- `inline_key/1`*: Given a message, return the pairs to add to the message (if any) and the field name for the inlined key.
- `inline_key/2`*
- `to/3`: Convert a TABM into an HTTP message.
- `to/4`*
- `ungroup_ids/2`*: Decode the `ao-ids` key into a map.
## Function Details ##

### body_to_parts/3 * ###

`body_to_parts(ContentType, Body, Opts) -> any()`

Split the body into parts, if it is a multipart.

### body_to_tabm/2 * ###

`body_to_tabm(HTTP, Opts) -> any()`

Generate the body TABM from the `body` key of the encoded message.

### boundary_from_parts/1 * ###

`boundary_from_parts(PartList) -> any()`

Generate a unique, reproducible boundary for the multipart body. We cannot use the ID of the message as the boundary, as the ID is not known until the message is encoded. Instead, we generate each body part individually, concatenate them, and apply a SHA2-256 hash to the result. This ensures that the boundary is unique, reproducible, and secure.

### do_to/3 * ###

`do_to(Binary, FormatOpts, Opts) -> any()`

### encode_body_part/4 * ###

`encode_body_part(PartName, BodyPart, InlineKey, Opts) -> any()`

Encode a multipart body part to a flat binary.

### encode_http_msg/2 ###

`encode_http_msg(Msg, Opts) -> any()`

Encode an HTTP message into a binary.

### encode_message_with_links_test/0 * ###

`encode_message_with_links_test() -> any()`

### field_to_http/3 * ###

`field_to_http(Httpsig, X2, Opts) -> any()`

All maps are encoded into the body of the HTTP message, to be further encoded later.

### from/3 ###

`from(Bin, Req, Opts) -> any()`

Convert an HTTP message into a TABM. Each HTTP structured field is encoded into its equivalent TABM encoding.

### from_body_part/3 * ###

`from_body_part(InlinedKey, Part, Opts) -> any()`

Parse a single part of a multipart body into a TABM.

### group_ids/1 * ###

`group_ids(Map) -> any()`

Group all elements with:

1. a key for which `?IS_ID` returns true, and
2. an immediate value

into a combined SF dict-_like_ structure. If not encoded, these keys would be sent as headers and lower-cased, losing their comparability against the original keys. The structure follows all SF dict rules, except that it allows keys to contain capitals. The HyperBEAM SF parser will accept these keys, but standard RFC 8941 parsers will not. Subsequently, the resulting `ao-cased` key is not added to the `ao-types` map.

### group_maps/1 * ###

`group_maps(Map) -> any()`

Merge maps at the same level, if possible.

### group_maps/4 * ###

`group_maps(Map, Parent, Top, Opts) -> any()`

### group_maps_flat_compatible_test/0 * ###

`group_maps_flat_compatible_test() -> any()`

The grouped-maps encoding is a subset of the flat encoding, where keys with map values are flattened. So, despite needing a special encoder to produce it, we can simply apply the flat encoder to it to get back the original message. The test asserts that this is indeed the case.

### group_maps_test/0 * ###

`group_maps_test() -> any()`

### inline_key/1 * ###

`inline_key(Msg) -> any()`

Given a message, return a tuple of: a list of pairs to add to the message (if any), and the field name for the inlined key. In order to preserve the field name of the inlined part, an additional field may need to be added.

### inline_key/2 * ###

`inline_key(Msg, Opts) -> any()`

### to/3 ###

`to(TABM, Req, Opts) -> any()`

Convert a TABM into an HTTP message. The HTTP message is a simple Erlang map that can be translated to a given web server's response API.

### to/4 * ###

`to(Bin, Req, FormatOpts, Opts) -> any()`

### ungroup_ids/2 * ###

`ungroup_ids(Msg, Opts) -> any()`

Decode the `ao-ids` key into a map.
--- END OF FILE: docs/resources/source-code/dev_codec_httpsig_conv.md --- --- START OF FILE: docs/resources/source-code/dev_codec_httpsig_siginfo.md --- # [Module dev_codec_httpsig_siginfo.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_codec_httpsig_siginfo.erl) A module for converting between commitments and their encoded `signature` and `signature-input` keys. ## Function Index ##
- `add_derived_specifiers/1`: Normalize key parameters to ensure their names are correct for inclusion in the `signature-input` and associated keys.
- `commitment_to_alg/2`*: Calculate an `alg` string from a commitment message, using its `commitment-device` and, optionally, its `type` key.
- `commitment_to_device_specifiers/2`*: Convert an `alg` to a commitment device.
- `commitment_to_sf_siginfo/3`*: Generate a `signature` and `signature-input` key pair from a given commitment.
- `commitment_to_sig_name/1`: Generate a signature name from a commitment.
- `commitments_to_siginfo/3`: Generate a `signature` and `signature-input` key pair from a commitment map.
- `committed_keys_to_siginfo/1`: Convert committed keys to their siginfo format.
- `decoding_nested_map_binary/1`*
- `from_siginfo_keys/3`: Normalize a list of `httpsig@1.0` keys to their equivalents in AO-Core format.
- `get_additional_params/1`*
- `nested_map_to_string/1`*
- `remove_derived_specifiers/1`: Remove derived specifiers from a list of component identifiers.
- `sf_siginfo_to_commitment/5`*: Take a `signature` and `signature-input` as parsed structured-fields and return a commitment.
- `siginfo_to_commitments/3`: Take a message with a `signature` and `signature-input` key pair and return a map of commitments.
- `to_siginfo_keys/3`: Normalize a list of AO-Core keys to their equivalents in `httpsig@1.0` format.
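
For orientation, the `signature` and `signature-input` keys handled by this module follow the RFC-9421 structured-field shape. The pair below is a schematic of that shape only: the signature name, component list, parameters, and signature bytes are invented for illustration.

```erlang
%% Schematic RFC-9421-style siginfo pair, expressed as message keys.
%% All names and values are illustrative placeholders.
SigInfo = #{
    <<"signature-input">> =>
        <<"sig-example=(\"content-digest\" \"@path\");alg=\"rsa-pss-sha512\";keyid=\"...\"">>,
    <<"signature">> =>
        <<"sig-example=:c2lnbmF0dXJlLWJ5dGVzLWhlcmU=:">>
}.
```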
## Function Details ## ### add_derived_specifiers/1 ### `add_derived_specifiers(ComponentIdentifiers) -> any()` Normalize key parameters to ensure their names are correct for inclusion in the `signature-input` and associated keys. ### commitment_to_alg/2 * ### `commitment_to_alg(Commitment, Opts) -> any()` Calculate an `alg` string from a commitment message, using its `commitment-device` and optionally, its `type` key. ### commitment_to_device_specifiers/2 * ### `commitment_to_device_specifiers(Commitment, Opts) -> any()` Convert an `alg` to a commitment device. If the `alg` has the form of a device specifier (`x@y.z...[/type]`), return the device. Otherwise, we assume that the `alg` is a `type` of the `httpsig@1.0` algorithm. `type` is an optional key that allows for subtyping of the algorithm. When provided, in the `alg` it is parsed and returned as the `type` key in the commitment message. ### commitment_to_sf_siginfo/3 * ### `commitment_to_sf_siginfo(Msg, Commitment, Opts) -> any()` Generate a `signature` and `signature-input` key pair from a given commitment. ### commitment_to_sig_name/1 ### `commitment_to_sig_name(Commitment) -> any()` Generate a signature name from a commitment. The commitment message is not expected to be complete: Only the `commitment-device`, and the `committer` or `keyid` keys are required. ### commitments_to_siginfo/3 ### `commitments_to_siginfo(Msg, Comms, Opts) -> any()` Generate a `signature` and `signature-input` key pair from a commitment map. ### committed_keys_to_siginfo/1 ### `committed_keys_to_siginfo(Msg) -> any()` Convert committed keys to their siginfo format. This involves removing the `body` key from the committed keys, if present, and replacing it with the `content-digest` key. ### decoding_nested_map_binary/1 * ### `decoding_nested_map_binary(Bin) -> any()` ### from_siginfo_keys/3 ### `from_siginfo_keys(HTTPEncMsg, BodyKeys, SigInfoCommitted) -> any()` Normalize a list of `httpsig@1.0` keys to their equivalents in AO-Core format. There are three stages: 1. Remove the @ prefix from the component identifiers, if present. 2. Replace `content-digest` with the body keys, if present. 3. Replace the `body` key again with the value of the `ao-body-key` key, if present. This is possible because the keys derived from the body often contain the `body` key itself. 4. If the `content-type` starts with `multipart/`, we remove it. ### get_additional_params/1 * ### `get_additional_params(Commitment) -> any()` ### nested_map_to_string/1 * ### `nested_map_to_string(Map) -> any()` ### remove_derived_specifiers/1 ### `remove_derived_specifiers(ComponentIdentifiers) -> any()` Remove derived specifiers from a list of component identifiers. ### sf_siginfo_to_commitment/5 * ### `sf_siginfo_to_commitment(Msg, BodyKeys, SFSig, SFSigInput, Opts) -> any()` Take a signature and signature-input as parsed structured-fields and return a commitment. ### siginfo_to_commitments/3 ### `siginfo_to_commitments(Msg, BodyKeys, Opts) -> any()` Take a message with a `signature` and `signature-input` key pair and return a map of commitments. ### to_siginfo_keys/3 ### `to_siginfo_keys(Msg, Commitment, Opts) -> any()` Normalize a list of AO-Core keys to their equivalents in `httpsig@1.0` format. This involves: - If the HTTPSig message given has an `ao-body-key` key and the committed keys list contains it, we replace it in the list with the `body` key and add the `ao-body-key` key. - If the list contains a `body` key, we replace it with the `content-digest` key. 
- Otherwise, we return the list unchanged. --- END OF FILE: docs/resources/source-code/dev_codec_httpsig_siginfo.md --- --- START OF FILE: docs/resources/source-code/dev_codec_httpsig.md --- # [Module dev_codec_httpsig.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_codec_httpsig.erl) This module implements HTTP Message Signatures as described in RFC-9421 (https://datatracker.ietf.org/doc/html/rfc9421), as an AO-Core device. ## Description ## It implements the codec standard (from/1, to/1), as well as the optional commitment functions (id/3, sign/3, verify/3). The commitment functions are found in this module, while the codec functions are relayed to the `dev_codec_httpsig_conv` module. ## Function Index ##
- `add_content_digest/2`: If the `body` key is present and a binary, replace it with a content-digest.
- `commit/3`: Commit to a message using the HTTP-Signature format.
- `committed_id_test/0`*
- `from/3`
- `key_present/2`*: Calculate whether a key or its `+link` TABM variant is present in a message.
- `keys_to_commit/3`*: Derive the set of keys to commit to from a `commit` request and a base message.
- `maybe_bundle_tag_commitment/3`*: Annotate the commitment with the `bundle` key if the request contains it.
- `multicommitted_id_test/0`*
- `normalize_for_encoding/3`: Given a base message and a commitment, derive the message and commitment normalized for encoding.
- `opts/1`*: Generate the `Opts` to use during AO-Core operations in the codec.
- `serialize/2`: A helper utility for creating a direct encoding of an HTTPSig message.
- `serialize/3`
- `sign_and_verify_link_test/0`*: Test that we can sign and verify a message with a link.
- `signature_base/3`*: Create the signature base that will be signed in order to create the Signature and SignatureInput.
- `signature_components_line/3`*: Given a list of Component Identifiers and a Request/Response Message context, create the "signature-base-line" portion of the signature base.
- `signature_params_line/2`*: Construct the "signature-params-line" part of the signature base.
- `to/3`
- `validate_large_message_from_http_test/0`*: Ensure that we can validate a signature on an extremely large and complex message that is sent over HTTP, signed with the codec.
- `verify/3`
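
As noted in the `commit/3` details that follow, the `type` parameter of a commit request selects the algorithm: `signed` defaults to rsa-pss-sha512 and `unsigned` to hmac-sha256. A minimal sketch of the two request shapes, with the binary key form being an assumption:

```erlang
%% The `type` parameter selects the commitment algorithm (see commit/3 below).
SignedReq   = #{ <<"type">> => <<"signed">> },   % defaults to rsa-pss-sha512
UnsignedReq = #{ <<"type">> => <<"unsigned">> }. % defaults to hmac-sha256
```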
## Function Details ##

### add_content_digest/2 ###

`add_content_digest(Msg, Opts) -> any()`

If the `body` key is present and a binary, replace it with a content-digest.

### commit/3 ###

`commit(Msg, Req, Opts) -> any()`

Commit to a message using the HTTP-Signature format. We use the `type` parameter to determine the type of commitment to use. If the `type` parameter is `signed`, we default to the rsa-pss-sha512 algorithm. If the `type` parameter is `unsigned`, we default to the hmac-sha256 algorithm.

### committed_id_test/0 * ###

`committed_id_test() -> any()`

### from/3 ###

`from(Msg, Req, Opts) -> any()`

### key_present/2 * ###

`key_present(Key, Msg) -> any()`

Calculate whether a key or its `+link` TABM variant is present in a message.

### keys_to_commit/3 * ###

`keys_to_commit(Base, Req, Opts) -> any()`

Derive the set of keys to commit to from a `commit` request and a base message.

### maybe_bundle_tag_commitment/3 * ###

`maybe_bundle_tag_commitment(Commitment, Req, Opts) -> any()`

Annotate the commitment with the `bundle` key if the request contains it.

### multicommitted_id_test/0 * ###

`multicommitted_id_test() -> any()`

### normalize_for_encoding/3 ###

`normalize_for_encoding(Msg, Commitment, Opts) -> any()`

Given a base message and a commitment, derive the message and commitment normalized for encoding.

### opts/1 * ###

`opts(RawOpts) -> any()`

Generate the `Opts` to use during AO-Core operations in the codec.

### serialize/2 ###

`serialize(Msg, Opts) -> any()`

A helper utility for creating a direct encoding of an HTTPSig message. This function supports two modes of operation:

1. `format: binary`, yielding a raw binary HTTP/1.1-style response that can either be stored or emitted raw across a transport medium.
2. `format: components`, yielding a message containing `headers` and `body` keys, suitable for use in connecting to HTTP-response flows implemented by other servers.

Optionally, the `index` key can be set to override resolution of the default index page into HTTP responses that do not contain their own `body` field.

### serialize/3 ###

`serialize(Msg, Req, Opts) -> any()`

### sign_and_verify_link_test/0 * ###

`sign_and_verify_link_test() -> any()`

Test that we can sign and verify a message with a link.

### signature_base/3 * ###

`signature_base(EncodedMsg, Commitment, Opts) -> any()`

Create the signature base that will be signed in order to create the Signature and SignatureInput. This implements a portion of RFC-9421; see: https://datatracker.ietf.org/doc/html/rfc9421#name-creating-the-signature-base

### signature_components_line/3 * ###

`signature_components_line(Req, Commitment, Opts) -> any()`

Given a list of Component Identifiers and a Request/Response Message context, create the "signature-base-line" portion of the signature base.

### signature_params_line/2 * ###

`signature_params_line(RawCommitment, Opts) -> any()`

Construct the "signature-params-line" part of the signature base. See https://datatracker.ietf.org/doc/html/rfc9421#section-2.5-7.3.2.4

### to/3 ###

`to(Msg, Req, Opts) -> any()`

### validate_large_message_from_http_test/0 * ###

`validate_large_message_from_http_test() -> any()`

Ensure that we can validate a signature on an extremely large and complex message that is sent over HTTP, signed with the codec.
### verify/3 ### `verify(Base, Req, RawOpts) -> any()` --- END OF FILE: docs/resources/source-code/dev_codec_httpsig.md --- --- START OF FILE: docs/resources/source-code/dev_codec_json.md --- # [Module dev_codec_json.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_codec_json.erl) A simple JSON codec for HyperBEAM's message format. ## Description ## Takes a message as TABM and returns an encoded JSON string representation. This codec utilizes the httpsig@1.0 codec for signing and verifying. ## Function Index ##
- `commit/3`
- `committed/3`
- `content_type/1`: Return the content type for the codec.
- `deserialize/3`: Deserialize the JSON string found at the given path.
- `from/3`: Decode a JSON string to a message.
- `serialize/3`: Serialize a message to a JSON string.
- `to/3`: Encode a message to a JSON string.
- `verify/3`
## Function Details ## ### commit/3 ### `commit(Msg, Req, Opts) -> any()` ### committed/3 ### `committed(Msg, Req, Opts) -> any()` ### content_type/1 ### `content_type(X1) -> any()` Return the content type for the codec. ### deserialize/3 ### `deserialize(Base, Req, Opts) -> any()` Deserialize the JSON string found at the given path. ### from/3 ### `from(Map, Req, Opts) -> any()` Decode a JSON string to a message. ### serialize/3 ### `serialize(Base, Msg, Opts) -> any()` Serialize a message to a JSON string. ### to/3 ### `to(Msg, Req, Opts) -> any()` Encode a message to a JSON string. ### verify/3 ### `verify(Msg, Req, Opts) -> any()` --- END OF FILE: docs/resources/source-code/dev_codec_json.md --- --- START OF FILE: docs/resources/source-code/dev_codec_structured.md --- # [Module dev_codec_structured.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_codec_structured.erl) A device implementing the codec interface (to/1, from/1) for HyperBEAM's internal, richly typed message format. ## Description ## This format mirrors HTTP Structured Fields, aside from its limitations of compound type depths, as well as limited floating point representations. As with all AO-Core codecs, its target format (the format it expects to receive in the `to/1` function, and give in `from/1`) is TABM. For more details, see the HTTP Structured Fields (RFC-9651) specification. ## Function Index ##
- `commit/3`
- `decode_ao_types/2`: Parse the `ao-types` field of a TABM and return a map of keys and their types.
- `decode_value/2`: Convert non-binary values to binary for serialization.
- `encode_ao_types/2`: Generate an `ao-types` structured field from a map of keys and their types.
- `encode_value/1`: Convert a term to a binary representation, emitting its type for serialization as a separate tag.
- `from/3`: Convert a rich message into a 'Type-Annotated-Binary-Message' (TABM).
- `implicit_keys/1`*: Find the implicit keys of a TABM.
- `implicit_keys/2`
- `is_list_from_ao_types/2`: Determine if the `ao-types` field of a TABM indicates that the message is a list.
- `linkify_mode/2`*: Discern the linkify mode from the request and the options.
- `list_encoding_test/0`*
- `to/3`: Convert a TABM into a native HyperBEAM message.
- `verify/3`
## Function Details ## ### commit/3 ### `commit(Msg, Req, Opts) -> any()` ### decode_ao_types/2 ### `decode_ao_types(Msg, Opts) -> any()` Parse the `ao-types` field of a TABM and return a map of keys and their types ### decode_value/2 ### `decode_value(Type, Value) -> any()` Convert non-binary values to binary for serialization. ### encode_ao_types/2 ### `encode_ao_types(Types, Opts) -> any()` Generate an `ao-types` structured field from a map of keys and their types. ### encode_value/1 ### `encode_value(Value) -> any()` Convert a term to a binary representation, emitting its type for serialization as a separate tag. ### from/3 ### `from(Bin, Req, Opts) -> any()` Convert a rich message into a 'Type-Annotated-Binary-Message' (TABM). ### implicit_keys/1 * ### `implicit_keys(Req) -> any()` Find the implicit keys of a TABM. ### implicit_keys/2 ### `implicit_keys(Req, Opts) -> any()` ### is_list_from_ao_types/2 ### `is_list_from_ao_types(Types, Opts) -> any()` Determine if the `ao-types` field of a TABM indicates that the message is a list. ### linkify_mode/2 * ### `linkify_mode(Req, Opts) -> any()` Discern the linkify mode from the request and the options. ### list_encoding_test/0 * ### `list_encoding_test() -> any()` ### to/3 ### `to(Bin, Req, Opts) -> any()` Convert a TABM into a native HyperBEAM message. ### verify/3 ### `verify(Msg, Req, Opts) -> any()` --- END OF FILE: docs/resources/source-code/dev_codec_structured.md --- --- START OF FILE: docs/resources/source-code/dev_cron.md --- # [Module dev_cron.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_cron.erl) A device that inserts new messages into the schedule to allow processes to passively 'call' themselves without user interaction. ## Function Index ##
- `every/3`: Exported function for scheduling a recurring message.
- `every_worker_loop/4`*
- `every_worker_loop_test/0`*: This test verifies that a recurring task can be scheduled and executed.
- `info/1`: Exported function for getting device info.
- `info/3`
- `once/3`: Exported function for scheduling a one-time message.
- `once_executed_test/0`*: This test verifies that a one-time task can be scheduled and executed.
- `once_worker/3`*: Internal function for scheduling a one-time message.
- `parse_time/1`*: Parse a time string into milliseconds.
- `stop/3`: Exported function for stopping a scheduled task.
- `stop_every_test/0`*: This test verifies that a recurring task can be stopped by calling the stop function with the task ID.
- `stop_once_test/0`*
- `test_worker/0`*: This is a helper function that is used to test the cron device.
- `test_worker/1`*
## Function Details ## ### every/3 ### `every(Msg1, Msg2, Opts) -> any()` Exported function for scheduling a recurring message. ### every_worker_loop/4 * ### `every_worker_loop(CronPath, Req, Opts, IntervalMillis) -> any()` ### every_worker_loop_test/0 * ### `every_worker_loop_test() -> any()` This test verifies that a recurring task can be scheduled and executed. ### info/1 ### `info(X1) -> any()` Exported function for getting device info. ### info/3 ### `info(Msg1, Msg2, Opts) -> any()` ### once/3 ### `once(Msg1, Msg2, Opts) -> any()` Exported function for scheduling a one-time message. ### once_executed_test/0 * ### `once_executed_test() -> any()` This test verifies that a one-time task can be scheduled and executed. ### once_worker/3 * ### `once_worker(Path, Req, Opts) -> any()` Internal function for scheduling a one-time message. ### parse_time/1 * ### `parse_time(BinString) -> any()` Parse a time string into milliseconds. ### stop/3 ### `stop(Msg1, Msg2, Opts) -> any()` Exported function for stopping a scheduled task. ### stop_every_test/0 * ### `stop_every_test() -> any()` This test verifies that a recurring task can be stopped by calling the stop function with the task ID. ### stop_once_test/0 * ### `stop_once_test() -> any()` ### test_worker/0 * ### `test_worker() -> any()` This is a helper function that is used to test the cron device. It is used to increment a counter and update the state of the worker. ### test_worker/1 * ### `test_worker(State) -> any()` --- END OF FILE: docs/resources/source-code/dev_cron.md --- --- START OF FILE: docs/resources/source-code/dev_cu.md --- # [Module dev_cu.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_cu.erl) ## Function Index ##
execute/2
push/2
## Function Details ## ### execute/2 ### `execute(CarrierMsg, S) -> any()` ### push/2 ### `push(Msg, S) -> any()` --- END OF FILE: docs/resources/source-code/dev_cu.md --- --- START OF FILE: docs/resources/source-code/dev_dedup.md --- # [Module dev_dedup.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_dedup.erl) A device that deduplicates messages in an evaluation stream, returning status `skip` if the message has already been seen. ## Description ## This device is typically used to ensure that a message is only executed once, even if assigned multiple times, upon a `~process@1.0` evaluation. It can, however, be used in many other contexts. This device honors the `pass` key if it is present in the message. If so, it will only run on the first pass. Additionally, the device supports a `subject-key` key that allows the caller to specify the key whose ID should be used for deduplication. If the `subject-key` key is not present, the device will use the `body` of the request as the subject. If the key is set to `request`, the device will use the entire request itself as the subject. This device runs on the first pass of the `compute` key call if executed in a stack, and not in subsequent passes. Currently the device stores its list of already seen items in memory, but at some point it will likely make sense to drop them in the cache. ## Function Index ##
- `dedup_test/0`*
- `dedup_with_multipass_test/0`*
- `handle/4`*: Forward the keys and `set` functions to the message device; handle all others with deduplication.
- `info/1`
## Function Details ## ### dedup_test/0 * ### `dedup_test() -> any()` ### dedup_with_multipass_test/0 * ### `dedup_with_multipass_test() -> any()` ### handle/4 * ### `handle(Key, M1, M2, Opts) -> any()` Forward the keys and `set` functions to the message device, handle all others with deduplication. This allows the device to be used in any context where a key is called. If the `dedup-key` ### info/1 ### `info(M1) -> any()` --- END OF FILE: docs/resources/source-code/dev_dedup.md --- --- START OF FILE: docs/resources/source-code/dev_delegated_compute.md --- # [Module dev_delegated_compute.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_delegated_compute.erl) Simple wrapper module that enables compute on remote machines, implementing the JSON-Iface. ## Description ## This can be used either as a standalone, to bring trusted results into the local node, or as the `Execution-Device` of an AO process. ## Function Index ##
- `compute/3`
- `do_compute/3`*: Execute computation on a remote machine via relay and the JSON-Iface.
- `init/3`: Initialize or normalize the compute-lite device.
- `normalize/3`
- `snapshot/3`
## Function Details ## ### compute/3 ### `compute(Msg1, Msg2, Opts) -> any()` ### do_compute/3 * ### `do_compute(ProcID, Msg2, Opts) -> any()` Execute computation on a remote machine via relay and the JSON-Iface. ### init/3 ### `init(Msg1, Msg2, Opts) -> any()` Initialize or normalize the compute-lite device. For now, we don't need to do anything special here. ### normalize/3 ### `normalize(Msg1, Msg2, Opts) -> any()` ### snapshot/3 ### `snapshot(Msg1, Msg2, Opts) -> any()` --- END OF FILE: docs/resources/source-code/dev_delegated_compute.md --- --- START OF FILE: docs/resources/source-code/dev_faff.md --- # [Module dev_faff.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_faff.erl) A module that implements a 'friends and family' pricing policy. ## Description ## It will allow users to process requests only if their addresses are in the allow-list for the node. Fundamentally against the spirit of permissionlessness, but it is useful if you are running a node for your own purposes and would not like to allow others to make use of it -- even for a fee. It also serves as a useful example of how to implement a custom pricing policy, as it implements stubs for both the pricing and ledger P4 APIs. ## Function Index ##
- `charge/3`: Charge the user's account if the request is allowed.
- `estimate/3`: Decide whether or not to service a request from a given address.
- `is_admissible/2`*: Check whether all of the signers of the request are in the allow-list.
## Function Details ## ### charge/3 ### `charge(X1, Req, NodeMsg) -> any()` Charge the user's account if the request is allowed. ### estimate/3 ### `estimate(X1, Msg, NodeMsg) -> any()` Decide whether or not to service a request from a given address. ### is_admissible/2 * ### `is_admissible(Msg, NodeMsg) -> any()` Check whether all of the signers of the request are in the allow-list. --- END OF FILE: docs/resources/source-code/dev_faff.md --- --- START OF FILE: docs/resources/source-code/dev_genesis_wasm.md --- # [Module dev_genesis_wasm.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_genesis_wasm.erl) A device that mimics an environment suitable for `legacynet` AO processes, using HyperBEAM infrastructure. ## Description ## This allows existing `legacynet` AO process definitions to be used in HyperBEAM. ## Function Index ##
- `collect_events/1`*: Collect events from the port and log them.
- `collect_events/2`*
- `compute/3`: Call the `delegated-compute@1.0` device to execute the request.
- `ensure_started/1`*: Ensure the local `genesis-wasm@1.0` is live.
- `init/3`: Initialize the device.
- `is_genesis_wasm_server_running/1`*: Check if the genesis-wasm server is running, using the cached process ID if available.
- `log_server_events/1`*: Log lines of output from the genesis-wasm server.
- `normalize/3`: Normalize the device.
- `snapshot/3`: Snapshot the device.
- `status/1`*: Check if the genesis-wasm server is running by requesting its status endpoint.
## Function Details ##

### collect_events/1 * ###

`collect_events(Port) -> any()`

Collect events from the port and log them.

### collect_events/2 * ###

`collect_events(Port, Acc) -> any()`

### compute/3 ###

`compute(Msg, Msg2, Opts) -> any()`

Call the `delegated-compute@1.0` device to execute the request. We then apply the `patch@1.0` device, applying any state patches that the AO process may have requested.

### ensure_started/1 * ###

`ensure_started(Opts) -> any()`

Ensure the local `genesis-wasm@1.0` is live. If it is not, start it.

### init/3 ###

`init(Msg, Msg2, Opts) -> any()`

Initialize the device.

### is_genesis_wasm_server_running/1 * ###

`is_genesis_wasm_server_running(Opts) -> any()`

Check if the genesis-wasm server is running, using the cached process ID if available.

### log_server_events/1 * ###

`log_server_events(Bin) -> any()`

Log lines of output from the genesis-wasm server.

### normalize/3 ###

`normalize(Msg, Msg2, Opts) -> any()`

Normalize the device.

### snapshot/3 ###

`snapshot(Msg, Msg2, Opts) -> any()`

Snapshot the device.

### status/1 * ###

`status(Opts) -> any()`

Check if the genesis-wasm server is running by requesting its status endpoint.

--- END OF FILE: docs/resources/source-code/dev_genesis_wasm.md ---

--- START OF FILE: docs/resources/source-code/dev_green_zone.md ---

# [Module dev_green_zone.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_green_zone.erl)

The green zone device, which provides secure communication and identity management between trusted nodes.

## Description ##

It handles node initialization, joining existing green zones, key exchange, and node identity cloning. All operations are protected by hardware commitment and encryption.

## Function Index ##
- `add_trusted_node/4`*: Adds a node to the trusted nodes list with its commitment report.
- `become/3`: Clones the identity of a target node in the green zone.
- `calculate_node_message/3`*: Generate the node message that should be set prior to joining a green zone.
- `decrypt_zone_key/2`*: Decrypts an AES key using the node's RSA private key.
- `default_zone_required_opts/1`*: Provides the default required options for a green zone.
- `encrypt_payload/2`*: Encrypts an AES key with a node's RSA public key.
- `finalize_become/5`*
- `info/1`: Controls which functions are exposed via the device API.
- `info/3`: Provides information about the green zone device and its API.
- `init/3`: Initialize the green zone for a node.
- `is_trusted/3`: Returns `true` if the request is signed by a trusted node.
- `join/3`: Initiates the join process for a node to enter an existing green zone.
- `join_peer/5`*: Processes a join request to a specific peer node.
- `key/3`: Encrypts and provides the node's private key for secure sharing.
- `maybe_set_zone_opts/4`*: Adopts configuration from a peer when joining a green zone.
- `rsa_wallet_integration_test/0`*: Test RSA operations with the existing wallet structure.
- `try_mount_encrypted_volume/2`*: Attempts to mount an encrypted volume using the green zone AES key.
- `validate_join/3`*: Validates an incoming join request from another node.
- `validate_peer_opts/2`*: Validates that a peer's configuration matches required options.
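
The join flow documented in the details below is driven by a handful of node options. A hedged sketch of what a joining node might set, using only the option keys named in this module's documentation (the peer location and ID values are placeholders):

```erlang
%% Hedged sketch of join-time node options, using the documented keys only.
JoinOpts = #{
    green_zone_peer_location => <<"https://peer-node.example:8734">>,
    green_zone_peer_id       => <<"PeerNodeID000000000000000000000000000000000">>,
    green_zone_adopt_config  => true  % adopt the peer's required configuration
}.
```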
## Function Details ## ### add_trusted_node/4 * ###

add_trusted_node(NodeAddr::binary(), Report::map(), RequesterPubKey::term(), Opts::map()) -> ok

`NodeAddr`: The joining node's address
`Report`: The commitment report provided by the joining node
`RequesterPubKey`: The joining node's public key
`Opts`: A map of configuration options
returns: ok Adds a node to the trusted nodes list with its commitment report. This function updates the trusted nodes configuration: 1. Retrieves the current trusted nodes map 2. Adds the new node with its report and public key 3. Updates the node configuration with the new trusted nodes list ### become/3 ###

become(M1::term(), M2::term(), Opts::map()) -> {ok, map()} | {error, binary()}

`Opts`: A map of configuration options
returns: `{ok, Map}` on success with confirmation details, or `{error, Binary}` if the node is not part of a green zone or identity adoption fails. Clones the identity of a target node in the green zone. This function performs the following operations: 1. Retrieves target node location and ID from the configuration 2. Verifies that the local node has a valid shared AES key 3. Requests the target node's encrypted key via its key endpoint 4. Verifies the response is from the expected peer 5. Decrypts the target node's private key using the shared AES key 6. Updates the local node's wallet with the target node's identity Required configuration in Opts map: - green_zone_peer_location: Target node's address - green_zone_peer_id: Target node's unique identifier - priv_green_zone_aes: The shared AES key for the green zone ### calculate_node_message/3 * ### `calculate_node_message(RequiredOpts, Req, List) -> any()` Generate the node message that should be set prior to joining a green zone. This function takes a required opts message, a request message, and an `adopt-config` value. The `adopt-config` value can be a boolean, a list of fields that should be included in the node message from the request, or a binary string of fields to include, separated by commas. ### decrypt_zone_key/2 * ###

decrypt_zone_key(EncZoneKey::binary(), Opts::map()) -> {ok, binary()} | {error, binary()}

`EncZoneKey`: The encrypted zone AES key (Base64 encoded or binary)
`Opts`: A map of configuration options
returns: {ok, DecryptedKey} on success with the decrypted AES key Decrypts an AES key using the node's RSA private key. This function handles decryption of the zone key: 1. Decodes the encrypted key if it's in Base64 format 2. Extracts the RSA private key components from the wallet 3. Creates an RSA private key record 4. Performs private key decryption on the encrypted key ### default_zone_required_opts/1 * ###

default_zone_required_opts(Opts::map()) -> map()

`Opts`: A map of configuration options from which to derive defaults
returns: A map of required configuration options for the green zone Provides the default required options for a green zone. This function defines the baseline security requirements for nodes in a green zone: 1. Restricts loading of remote devices and only allows trusted signers 2. Limits to preloaded devices from the initiating machine 3. Enforces specific store configuration 4. Prevents route changes from the defaults 5. Requires matching hooks across all peers 6. Disables message scheduling to prevent conflicts 7. Enforces a permanent state to prevent further configuration changes ### encrypt_payload/2 * ###

encrypt_payload(AESKey::binary(), RequesterPubKey::term()) -> binary()

`AESKey`: The shared AES key (256-bit binary)
`RequesterPubKey`: The node's public RSA key
returns: The encrypted AES key Encrypts an AES key with a node's RSA public key. This function securely encrypts the shared key for transmission: 1. Extracts the RSA public key components 2. Creates an RSA public key record 3. Performs public key encryption on the AES key ### finalize_become/5 * ### `finalize_become(KeyResp, NodeLocation, NodeID, GreenZoneAES, Opts) -> any()` ### info/1 ### `info(X1) -> any()` Controls which functions are exposed via the device API. This function defines the security boundary for the green zone device by explicitly listing which functions are available through the API. ### info/3 ### `info(Msg1, Msg2, Opts) -> any()` Provides information about the green zone device and its API. This function returns detailed documentation about the device, including: 1. A high-level description of the device's purpose 2. Version information 3. Available API endpoints with their parameters and descriptions ### init/3 ###

init(M1::term(), M2::term(), Opts::map()) -> {ok, binary()} | {error, binary()}

`Opts`: A map of configuration options
returns: `{ok, Binary}` on success with confirmation message, or `{error, Binary}` on failure with error message. Initialize the green zone for a node. This function performs the following operations: 1. Validates the node's history to ensure this is a valid initialization 2. Retrieves or creates a required configuration for the green zone 3. Ensures a wallet (keypair) exists or creates a new one 4. Generates a new 256-bit AES key for secure communication 5. Updates the node's configuration with these cryptographic identities Config options in Opts map: - green_zone_required_config: (Optional) Custom configuration requirements - priv_wallet: (Optional) Existing wallet to use instead of creating a new one - priv_green_zone_aes: (Optional) Existing AES key, if already part of a zone ### is_trusted/3 ### `is_trusted(M1, Req, Opts) -> any()` Returns `true` if the request is signed by a trusted node. ### join/3 ###

join(M1::term(), M2::term(), Opts::map()) -> {ok, map()} | {error, binary()}

`M1`: The join request message with target peer information
`M2`: Additional request details, may include adoption preferences
`Opts`: A map of configuration options for join operations
returns: `{ok, Map}` on success with join response details, or `{error, Binary}` on failure with error message. Initiates the join process for a node to enter an existing green zone. This function performs the following operations depending on the state: 1. Validates the node's history to ensure proper initialization 2. Checks for target peer information (location and ID) 3. If target peer is specified: a. Generates a commitment report for the peer b. Prepares and sends a POST request to the target peer c. Verifies the response and decrypts the returned zone key d. Updates local configuration with the shared AES key 4. If no peer is specified, processes the join request locally Config options in Opts map: - green_zone_peer_location: Target peer's address - green_zone_peer_id: Target peer's unique identifier - green_zone_adopt_config: (Optional) Whether to adopt peer's configuration (default: true) ### join_peer/5 * ###

join_peer(PeerLocation::binary(), PeerID::binary(), M1::term(), M2::term(), Opts::map()) -> {ok, map()} | {error, map() | binary()}

`PeerLocation`: The target peer's address
`PeerID`: The target peer's unique identifier
`M2`: May contain ShouldMount flag to enable encrypted volume mounting
returns: `{ok, Map}` on success with confirmation message, or `{error, Map|Binary}` on failure with error details Processes a join request to a specific peer node. This function handles the client-side join flow when connecting to a peer: 1. Verifies the node is not already in a green zone 2. Optionally adopts configuration from the target peer 3. Generates a hardware-backed commitment report 4. Sends a POST request to the peer's join endpoint 5. Verifies the response signature 6. Decrypts the returned AES key 7. Updates local configuration with the shared key 8. Optionally mounts an encrypted volume using the shared key ### key/3 ###

key(M1::term(), M2::term(), Opts::map()) -> {ok, map()} | {error, binary()}

`Opts`: A map of configuration options
returns: `{ok, Map}` containing the encrypted key and IV on success, or `{error, Binary}` if the node is not part of a green zone Encrypts and provides the node's private key for secure sharing. This function performs the following operations: 1. Retrieves the shared AES key and the node's wallet 2. Verifies that the node is part of a green zone (has a shared AES key) 3. Generates a random initialization vector (IV) for encryption 4. Encrypts the node's private key using AES-256-GCM with the shared key 5. Returns the encrypted key and IV for secure transmission Required configuration in Opts map: - priv_green_zone_aes: The shared AES key for the green zone - priv_wallet: The node's wallet containing the private key to encrypt ### maybe_set_zone_opts/4 * ###

maybe_set_zone_opts(PeerLocation::binary(), PeerID::binary(), Req::map(), InitOpts::map()) -> {ok, map()} | {error, binary()}

`PeerLocation`: The location of the peer node to join
`PeerID`: The ID of the peer node to join
`Req`: The request message with adoption preferences
`InitOpts`: A map of initial configuration options
returns: `{ok, Map}` with updated configuration on success, or `{error, Binary}` if configuration retrieval fails Adopts configuration from a peer when joining a green zone. This function handles the conditional adoption of peer configuration: 1. Checks if adoption is enabled (default: true) 2. Requests required configuration from the peer 3. Verifies the authenticity of the configuration 4. Creates a node message with appropriate settings 5. Updates the local node configuration Config options: - green_zone_adopt_config: Controls configuration adoption (boolean, list, or binary) ### rsa_wallet_integration_test/0 * ### `rsa_wallet_integration_test() -> any()` Test RSA operations with the existing wallet structure. This test function verifies that encryption and decryption using the RSA keys from the wallet work correctly. It creates a new wallet, encrypts a test message with the RSA public key, and then decrypts it with the RSA private key, asserting that the decrypted message matches the original. ### try_mount_encrypted_volume/2 * ### `try_mount_encrypted_volume(AESKey, Opts) -> any()` Attempts to mount an encrypted volume using the green zone AES key. This function handles the complete process of secure storage setup by delegating to the dev_volume module, which provides a unified interface for volume management. The encryption key used for the volume is the same AES key used for green zone communication, ensuring that only nodes in the green zone can access the data. ### validate_join/3 * ###

validate_join(M1::term(), Req::map(), Opts::map()) -> {ok, map()} | {error, binary()}

`M1`: Ignored parameter
`Req`: The join request containing commitment report and public key
`Opts`: A map of configuration options
returns: `{ok, Map}` on success with encrypted AES key, or `{error, Binary}` on failure with error message Validates an incoming join request from another node. This function handles the server-side join flow when receiving a connection request: 1. Validates the peer's configuration meets required standards 2. Extracts the commitment report and public key from the request 3. Verifies the hardware-backed commitment report 4. Adds the joining node to the trusted nodes list 5. Encrypts the shared AES key with the peer's public key 6. Returns the encrypted key to the requesting node ### validate_peer_opts/2 * ###
### validate_peer_opts/2 * ###
validate_peer_opts(Req::map(), Opts::map()) -> boolean()

`Req`: The request message containing the peer's configuration
`Opts`: A map of the local node's configuration options
returns: true if the peer's configuration is valid, false otherwise.

Validates that a peer's configuration matches required options. This function ensures the peer node meets configuration requirements:

1. Retrieves the local node's required configuration
2. Gets the peer's options from its message
3. Adds required configuration to the peer's required options list
4. Verifies the peer's node history is valid
5. Checks that the peer's options match the required configuration

--- END OF FILE: docs/resources/source-code/dev_green_zone.md ---

--- START OF FILE: docs/resources/source-code/dev_hook.md ---

# [Module dev_hook.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_hook.erl)

A generalized interface for `hooking` into HyperBEAM nodes.

## Description ##

This module allows users to define `hooks` that are executed at various points in the lifecycle of nodes and message evaluations. Hooks are maintained in the `node message` options, under the `on` key. Each `hook` may have zero or many `handlers`, against which its request is executed. A new `handler` of a hook can be registered by simply adding a new key to that message. If multiple hooks need to be executed for a single event, the key's value can be set to a list of hooks.

`hook`s themselves do not need to be added explicitly. Any device can add a hook by simply executing `dev_hook:on(HookName, Req, Opts)`. This function does not affect the hashpath of a message and is not exported on the device's API, such that it is not possible to call it directly with AO-Core resolution.

All handlers are expressed in the form of a message, upon which the hook's request is evaluated:

    AO(HookMsg, Req, Opts) => {Status, Result}

The `Status` and `Result` of the evaluation can be used at the `hook` caller's discretion. If multiple handlers are to be executed for a single `hook`, the result of each is used as the input to the next, on the assumption that the status of the previous is `ok`. If a non-`ok` status is encountered, the evaluation is halted and the result is returned to the caller. This means that in most cases, hooks take the form of chainable pipelines of functions, passing the most pertinent data in the `body` key of both the request and result.

Hook definitions can also set the `hook/result` key to `ignore`, if the result of the execution should be discarded and the prior value (the input to the hook) should be used instead. The `hook/commit-request` key can also be set to `true` if the request should be committed by the node before execution of the hook.

The default HyperBEAM node implements several useful hooks. They include:

- `start`: Executed when the node starts.
    - `Req/body`: The node's initial configuration.
    - `Result/body`: The node's possibly updated configuration.
- `request`: Executed when a request is received via the HTTP API.
    - `Req/body`: The sequence of messages that the node will evaluate.
    - `Req/request`: The raw, unparsed singleton request.
    - `Result/body`: The sequence of messages that the node will evaluate.
- `step`: Executed after each message in a sequence has been evaluated.
    - `Req/body`: The result of the evaluation.
    - `Result/body`: The result of the evaluation.
- `response`: Executed when a response is sent via the HTTP API.
    - `Req/body`: The result of the evaluation.
    - `Req/request`: The raw, unparsed singleton request that was used to generate the response.
    - `Result/body`: The message to be sent in response to the request.
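As a concrete illustration of the mechanism described above, the sketch below installs a single handler for the `request` hook in a node message and then triggers the hook via `dev_hook:on/3`. The handler shape and the device name `my-request-filter@1.0` are assumptions for illustration only; any message that AO-Core can resolve against the hook's request can serve as a handler.

```erlang
%% Hedged sketch: a node message whose `on` key installs one handler for the
%% `request` hook, then a manual invocation of that hook.
NodeOpts = #{
    on => #{
        <<"request">> => #{ <<"device">> => <<"my-request-filter@1.0">> }
    }
},
%% The hook request carries the data for the handler pipeline in its `body`.
HookReq = #{ <<"body">> => [] },
{Status, Result} = dev_hook:on(<<"request">>, HookReq, NodeOpts).
```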
Additionally, this module implements a traditional device API, allowing the node operator to register hooks to the node and find those that are currently active.

## Function Index ##
execute_handler/4*Execute a single handler Handlers are expressed as messages that can be resolved via AO.
execute_handlers/4*Execute a list of handlers in sequence.
find/2Get all handlers for a specific hook from the node message options.
find/3
halt_on_error_test/0*Test that pipeline execution halts on error.
info/1Device API information.
multiple_handlers_test/0*Test that multiple handlers form a pipeline.
no_handlers_test/0*Test that hooks with no handlers return the original request.
on/3Execute a named hook with the provided request and options This function finds all handlers for the hook and evaluates them in sequence.
single_handler_test/0*Test that a single handler is executed correctly.
## Function Details ## ### execute_handler/4 * ### `execute_handler(HookName, Handler, Req, Opts) -> any()` Execute a single handler Handlers are expressed as messages that can be resolved via AO. ### execute_handlers/4 * ### `execute_handlers(HookName, Rest, Req, Opts) -> any()` Execute a list of handlers in sequence. The result of each handler is used as input to the next handler. If a handler returns a non-ok status, execution is halted. ### find/2 ### `find(HookName, Opts) -> any()` Get all handlers for a specific hook from the node message options. Handlers are stored in the `on` key of this message. The `find/2` variant of this function only takes a hook name and node message, and is not called directly via the device API. Instead it is used by `on/3` and other internal functionality to find handlers when necessary. The `find/3` variant can, however, be called directly via the device API. ### find/3 ### `find(Base, Req, Opts) -> any()` ### halt_on_error_test/0 * ### `halt_on_error_test() -> any()` Test that pipeline execution halts on error ### info/1 ### `info(X1) -> any()` Device API information ### multiple_handlers_test/0 * ### `multiple_handlers_test() -> any()` Test that multiple handlers form a pipeline ### no_handlers_test/0 * ### `no_handlers_test() -> any()` Test that hooks with no handlers return the original request ### on/3 ### `on(HookName, Req, Opts) -> any()` Execute a named hook with the provided request and options This function finds all handlers for the hook and evaluates them in sequence. The result of each handler is used as input to the next handler. ### single_handler_test/0 * ### `single_handler_test() -> any()` Test that a single handler is executed correctly --- END OF FILE: docs/resources/source-code/dev_hook.md --- --- START OF FILE: docs/resources/source-code/dev_hyperbuddy.md --- # [Module dev_hyperbuddy.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_hyperbuddy.erl) A device that renders a REPL-like interface for AO-Core via HTML. ## Function Index ##
events/3Return the current event counters as a message.
format/3Employ HyperBEAM's internal pretty printer to format a message.
info/0Export an explicit list of files via http.
metrics/3The main HTML page for the REPL device.
return_file/1*Read a file from disk and serve it as a static HTML page.
return_file/2
serve/4*Serve a file from the priv directory.
## Function Details ## ### events/3 ### `events(X1, Req, Opts) -> any()` Return the current event counters as a message. ### format/3 ### `format(Base, Req, Opts) -> any()` Employ HyperBEAM's internal pretty printer to format a message. ### info/0 ### `info() -> any()` Export an explicit list of files via http. ### metrics/3 ### `metrics(X1, Req, Opts) -> any()` The main HTML page for the REPL device. ### return_file/1 * ### `return_file(Name) -> any()` Read a file from disk and serve it as a static HTML page. ### return_file/2 ### `return_file(Device, Name) -> any()` ### serve/4 * ### `serve(Key, M1, M2, Opts) -> any()` Serve a file from the priv directory. Only serves files that are explicitly listed in the `routes` field of the `info/0` return value. --- END OF FILE: docs/resources/source-code/dev_hyperbuddy.md --- --- START OF FILE: docs/resources/source-code/dev_json_iface.md --- # [Module dev_json_iface.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_json_iface.erl) A device that provides a way for WASM execution to interact with the HyperBEAM (and AO) systems, using JSON as a shared data representation. ## Description ## The interface is easy to use. It works as follows: 1. The device is given a message that contains a process definition, WASM environment, and a message that contains the data to be processed, including the image to be used in part of `execute{pass=1}`. 2. The device is called with `execute{pass=2}`, which reads the result of the process execution from the WASM environment and adds it to the message. The device has the following requirements and interface: ``` M1/Computed when /Pass == 1 -> Assumes: M1/priv/wasm/instance M1/Process M2/Message M2/Assignment/Block-Height Generates: /wasm/handler /wasm/params Side-effects: Writes the process and message as JSON representations into the WASM environment. M1/Computed when M2/Pass == 2 -> Assumes: M1/priv/wasm/instance M2/Results M2/Process Generates: /Results/Outbox /Results/Data ``` ## Function Index ##
aos_stack_benchmark_test_/0*
basic_aos_call_test_/0*
compute/3On first pass prepare the call, on second pass get the results.
denormalize_message/2*Normalize a message for AOS-compatibility.
env_read/3*Read the results out of the execution environment.
env_write/5*Write the message and process into the execution environment.
generate_aos_msg/2
generate_stack/1
generate_stack/2
header_case_string/1*
init/3Initialize the device.
json_to_message/2Translates a compute result -- either from a WASM execution using the JSON-Iface, or from a Legacy CU -- and transforms it into a result message.
maybe_list_to_binary/1*
message_to_json_struct/2
message_to_json_struct/3*
normalize_results/1*Normalize the results of an evaluation.
postprocess_outbox/3*Post-process messages in the outbox to add the correct from-process and from-image tags.
prep_call/3*Prepare the WASM environment for execution by writing the process string and the message as JSON representations into the WASM environment.
prepare_header_case_tags/2*Convert a message without an original-tags field into a list of key-value pairs, with the keys in HTTP header-case.
prepare_tags/2*Prepare the tags of a message as a key-value list, for use in the construction of the JSON-Struct message.
preprocess_results/2*After the process returns messages from an evaluation, the signing node needs to add some tags to each message and spawn such that the target process knows these messages are created by a process.
results/3*Read the computed results out of the WASM environment, assuming that the environment has been set up by prep_call/3 and that the WASM executor has been called with computed{pass=1}.
safe_to_id/1*
tags_to_map/2*Convert a message with tags into a map of their key-value pairs.
test_init/0*
## Function Details ## ### aos_stack_benchmark_test_/0 * ### `aos_stack_benchmark_test_() -> any()` ### basic_aos_call_test_/0 * ### `basic_aos_call_test_() -> any()` ### compute/3 ### `compute(M1, M2, Opts) -> any()` On first pass prepare the call, on second pass get the results. ### denormalize_message/2 * ### `denormalize_message(Message, Opts) -> any()` Normalize a message for AOS-compatibility. ### env_read/3 * ### `env_read(M1, M2, Opts) -> any()` Read the results out of the execution environment. ### env_write/5 * ### `env_write(ProcessStr, MsgStr, Base, Req, Opts) -> any()` Write the message and process into the execution environment. ### generate_aos_msg/2 ### `generate_aos_msg(ProcID, Code) -> any()` ### generate_stack/1 ### `generate_stack(File) -> any()` ### generate_stack/2 ### `generate_stack(File, Mode) -> any()` ### header_case_string/1 * ### `header_case_string(Key) -> any()` ### init/3 ### `init(M1, M2, Opts) -> any()` Initialize the device. ### json_to_message/2 ### `json_to_message(JSON, Opts) -> any()` Translates a compute result -- either from a WASM execution using the JSON-Iface, or from a `Legacy` CU -- and transforms it into a result message. ### maybe_list_to_binary/1 * ### `maybe_list_to_binary(List) -> any()` ### message_to_json_struct/2 ### `message_to_json_struct(RawMsg, Opts) -> any()` ### message_to_json_struct/3 * ### `message_to_json_struct(RawMsg, Features, Opts) -> any()` ### normalize_results/1 * ### `normalize_results(Msg) -> any()` Normalize the results of an evaluation. ### postprocess_outbox/3 * ### `postprocess_outbox(Msg, Proc, Opts) -> any()` Post-process messages in the outbox to add the correct `from-process` and `from-image` tags. ### prep_call/3 * ### `prep_call(RawM1, RawM2, Opts) -> any()` Prepare the WASM environment for execution by writing the process string and the message as JSON representations into the WASM environment. ### prepare_header_case_tags/2 * ### `prepare_header_case_tags(TABM, Opts) -> any()` Convert a message without an `original-tags` field into a list of key-value pairs, with the keys in HTTP header-case. ### prepare_tags/2 * ### `prepare_tags(Msg, Opts) -> any()` Prepare the tags of a message as a key-value list, for use in the construction of the JSON-Struct message. ### preprocess_results/2 * ### `preprocess_results(Msg, Opts) -> any()` After the process returns messages from an evaluation, the signing node needs to add some tags to each message and spawn such that the target process knows these messages are created by a process. ### results/3 * ### `results(M1, M2, Opts) -> any()` Read the computed results out of the WASM environment, assuming that the environment has been set up by `prep_call/3` and that the WASM executor has been called with `computed{pass=1}`. ### safe_to_id/1 * ### `safe_to_id(ID) -> any()` ### tags_to_map/2 * ### `tags_to_map(Msg, Opts) -> any()` Convert a message with tags into a map of their key-value pairs. ### test_init/0 * ### `test_init() -> any()` --- END OF FILE: docs/resources/source-code/dev_json_iface.md --- --- START OF FILE: docs/resources/source-code/dev_local_name.md --- # [Module dev_local_name.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_local_name.erl) A device for registering and looking up local names. ## Description ## This device uses the node message to store a local cache of its known names, and the typical non-volatile storage of the node message to store the names long-term. ## Function Index ##
default_lookup/4*Handle all other requests by delegating to the lookup function.
direct_register/2Register a name without checking if the caller is an operator.
find_names/1*Returns a message containing all known names.
generate_test_opts/0*
http_test/0*
info/1Export only the lookup and register functions.
load_names/1*Loads all known names from the cache and returns the new node message with those names loaded into it.
lookup/3Takes a key argument and returns the value of the name, if it exists.
lookup_opts_name_test/0*
no_names_test/0*
register/3Takes a key and value argument and registers the name.
register_test/0*
unauthorized_test/0*
update_names/2*Updates the node message with the new names.
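For orientation, the sketch below shows how another device might call `direct_register/2`, listed above, to register a name without the operator check. The wrapper `register_alias/1` and the request keys `key`/`value` are assumptions for illustration; the index only states that registration takes a key and a value.

```erlang
%% Hedged sketch: register a local name from another device, bypassing the
%% operator check performed by register/3. Request keys are assumed.
register_alias(Opts) ->
    Req = #{ <<"key">> => <<"my-alias">>, <<"value">> => <<"SomeMessageID">> },
    dev_local_name:direct_register(Req, Opts).
```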
## Function Details ## ### default_lookup/4 * ### `default_lookup(Key, X2, Req, Opts) -> any()` Handle all other requests by delegating to the lookup function. ### direct_register/2 ### `direct_register(Req, Opts) -> any()` Register a name without checking if the caller is an operator. Exported for use by other devices, but not publicly available. ### find_names/1 * ### `find_names(Opts) -> any()` Returns a message containing all known names. ### generate_test_opts/0 * ### `generate_test_opts() -> any()` ### http_test/0 * ### `http_test() -> any()` ### info/1 ### `info(Opts) -> any()` Export only the `lookup` and `register` functions. ### load_names/1 * ### `load_names(Opts) -> any()` Loads all known names from the cache and returns the new `node message` with those names loaded into it. ### lookup/3 ### `lookup(X1, Req, Opts) -> any()` Takes a `key` argument and returns the value of the name, if it exists. ### lookup_opts_name_test/0 * ### `lookup_opts_name_test() -> any()` ### no_names_test/0 * ### `no_names_test() -> any()` ### register/3 ### `register(X1, Req, Opts) -> any()` Takes a `key` and `value` argument and registers the name. The caller must be the node operator in order to register a name. ### register_test/0 * ### `register_test() -> any()` ### unauthorized_test/0 * ### `unauthorized_test() -> any()` ### update_names/2 * ### `update_names(LocalNames, Opts) -> any()` Updates the node message with the new names. Further HTTP requests will use this new message, removing the need to look up the names from non-volatile storage. --- END OF FILE: docs/resources/source-code/dev_local_name.md --- --- START OF FILE: docs/resources/source-code/dev_lookup.md --- # [Module dev_lookup.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_lookup.erl) A device that looks up an ID from a local store and returns it, honoring the `accept` key to return the correct format. ## Function Index ##
aos2_message_lookup_test/0*
binary_lookup_test/0*
http_lookup_test/0*
message_lookup_test/0*
read/3Fetch a resource from the cache using "target" ID extracted from the message.
## Function Details ## ### aos2_message_lookup_test/0 * ### `aos2_message_lookup_test() -> any()` ### binary_lookup_test/0 * ### `binary_lookup_test() -> any()` ### http_lookup_test/0 * ### `http_lookup_test() -> any()` ### message_lookup_test/0 * ### `message_lookup_test() -> any()` ### read/3 ### `read(M1, M2, Opts) -> any()` Fetch a resource from the cache using "target" ID extracted from the message --- END OF FILE: docs/resources/source-code/dev_lookup.md --- --- START OF FILE: docs/resources/source-code/dev_lua_lib.md --- # [Module dev_lua_lib.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_lua_lib.erl) A module for providing AO library functions to the Lua environment. ## Description ## This module contains the implementation of the functions, each by the name that should be used in the `ao` table in the Lua environment. Every export is imported into the Lua environment. Each function adheres closely to the Luerl calling convention, adding the appropriate node message as a third argument: fun(Args, State, NodeMsg) -> {ResultTerms, NewState} As Lua allows for multiple return values, each function returns a list of terms to grant to the caller. Matching the tuple convention used by AO-Core, the first term is typically the status, and the second term is the result. ## Function Index ##
convert_as/1*Converts any as terms from Lua to their HyperBEAM equivalents.
event/3Allows Lua scripts to signal events using the HyperBEAM host's internal event system.
get/3A wrapper for hb_ao's get functionality.
install/3Install the library into the given Lua environment.
resolve/3A wrapper function for performing AO-Core resolutions.
return/3*Helper function for returning a result from a Lua function.
set/3Wrapper for hb_ao's set functionality.
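The Luerl calling convention described above (`fun(Args, State, NodeMsg) -> {ResultTerms, NewState}`) can be made concrete with a short sketch. `ping/3` is a hypothetical helper written in that shape, not one of the functions actually exported by `dev_lua_lib`.

```erlang
%% Hedged sketch of a library function in the Luerl calling convention used
%% by this module: take the Lua arguments, the Luerl VM state and the node
%% message, and return a list of Lua return values plus the new state.
ping(_Args, State, _NodeMsg) ->
    %% Following the AO-Core tuple convention: status first, then the result.
    {[<<"ok">>, <<"pong">>], State}.
```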
## Function Details ## ### convert_as/1 * ### `convert_as(Other) -> any()` Converts any `as` terms from Lua to their HyperBEAM equivalents. ### event/3 ### `event(X1, ExecState, Opts) -> any()` Allows Lua scripts to signal events using the HyperBEAM hosts internal event system. ### get/3 ### `get(X1, ExecState, ExecOpts) -> any()` A wrapper for `hb_ao`'s `get` functionality. ### install/3 ### `install(Base, State, Opts) -> any()` Install the library into the given Lua environment. ### resolve/3 ### `resolve(Msgs, ExecState, ExecOpts) -> any()` A wrapper function for performing AO-Core resolutions. Offers both the single-message (using `hb_singleton:from/1` to parse) and multiple-message (using `hb_ao:resolve_many/2`) variants. ### return/3 * ### `return(Result, ExecState, Opts) -> any()` Helper function for returning a result from a Lua function. ### set/3 ### `set(X1, ExecState, ExecOpts) -> any()` Wrapper for `hb_ao`'s `set` functionality. --- END OF FILE: docs/resources/source-code/dev_lua_lib.md --- --- START OF FILE: docs/resources/source-code/dev_lua_test_ledgers.md --- # [Module dev_lua_test_ledgers.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_lua_test_ledgers.erl) ## Function Index ##
apply_names/3*Apply a map of environment names to elements in either a map or list.
balance/3*Retrieve a single balance from the ledger.
balance_total/3*Get the total balance for an ID across all ledgers in a set.
balances/2*Get the balances of a ledger.
balances/3*
comma_separated_scheduler_list_test/0*Ensure that the hyper-token.lua script can parse comma-separated IDs in the scheduler field of a message.
do_apply_names/3*
ledger/2*Generate a Lua process definition message.
ledger/3*
ledgers/2*Get the local expectation of a ledger's balances with peer ledgers.
lua_script/1*Generate a Lua script key from a file or list of files.
map/2*Generate a complete overview of the test environment's balances and ledgers.
map/3*
multischeduler/0*
multischeduler_test_disabled/0*Verify that sub-ledgers can request and enforce multiple scheduler commitments.
normalize_env/1*Normalize a set of processes, representing ledgers in a test environment, to a canonical form: A map of ID => Proc.
normalize_without_root/2*Return the normalized environment without the root ledger.
register/3*Request that a peer register with a sub-ledger.
subledger/2*Generate a test sub-ledger process definition message.
subledger/3*
subledger_deposit/0*
subledger_deposit_test_/0*Verify that a user can deposit tokens into a sub-ledger.
subledger_registration_test_disabled/0*Verify that peer ledgers on the same token are able to register mutually to establish a peer-to-peer connection.
subledger_supply/3*Calculate the supply of tokens in all sub-ledgers, from the balances of the root ledger.
subledger_to_subledger/0*
subledger_to_subledger_test_/0*Verify that registered sub-ledgers are able to send tokens to each other without the need for messages on the root ledger.
subledger_transfer/0*
subledger_transfer_test_/0*Simulate inter-ledger payments between users on a single sub-ledger: 1.
supply/2*Get the supply of a ledger, either now or initial.
supply/3*
test_opts/0*Create a node message for the test that avoids looking up unknown recipients via remote stores.
transfer/0*
transfer/5*Generate a test transfer message.
transfer/6*
transfer_test_/0*Test the transfer function.
transfer_unauthorized/0*
transfer_unauthorized_test_/0*Users must not be able to send tokens they do not own.
unregistered_peer_transfer/0*
unregistered_peer_transfer_test_/0*Verify that a ledger can send tokens to a peer ledger that is not registered with it yet.
user_supply/3*Calculate the supply of tokens held by users on a ledger, excluding those held in sub-ledgers.
verify_net/3*Execute all invariant checks for a pair of root ledger and sub-ledgers.
verify_net_peer_balances/2*Verify the consistency of all expected ledger balances with their peer ledgers and the actual balances held.
verify_net_supply/3*Verify that the sum of all spendable balances held by ledgers in a test network is equal to the initial supply of tokens.
verify_peer_balances/3*Verify that a ledger's expectation of its balances with peer ledgers is consistent with the actual balances held.
verify_root_supply/2*Verify that the initial supply of tokens on the root ledger is the same as the current supply.
## Function Details ## ### apply_names/3 * ### `apply_names(Map, EnvNames, Opts) -> any()` Apply a map of environment names to elements in either a map or list. Expects a map of `ID or ProcMsg or Wallet => Name` as the `EnvNames` argument, and a potentially deep map or list of elements to apply the names to. ### balance/3 * ### `balance(ProcMsg, User, Opts) -> any()` Retreive a single balance from the ledger. ### balance_total/3 * ### `balance_total(Procs, ID, Opts) -> any()` Get the total balance for an ID across all ledgers in a set. ### balances/2 * ### `balances(ProcMsg, Opts) -> any()` Get the balances of a ledger. ### balances/3 * ### `balances(Mode, ProcMsg, Opts) -> any()` ### comma_separated_scheduler_list_test/0 * ### `comma_separated_scheduler_list_test() -> any()` Ensure that the `hyper-token.lua` script can parse comma-separated IDs in the `scheduler` field of a message. ### do_apply_names/3 * ### `do_apply_names(Map, EnvNames, Opts) -> any()` ### ledger/2 * ### `ledger(Script, Opts) -> any()` Generate a Lua process definition message. ### ledger/3 * ### `ledger(Script, Extra, Opts) -> any()` ### ledgers/2 * ### `ledgers(ProcMsg, Opts) -> any()` Get the local expectation of a ledger's balances with peer ledgers. ### lua_script/1 * ### `lua_script(Files) -> any()` Generate a Lua `script` key from a file or list of files. ### map/2 * ### `map(Procs, Opts) -> any()` Generate a complete overview of the test environment's balances and ledgers. Optionally, a map of environment names can be provided to make the output more readable. ### map/3 * ### `map(Procs, EnvNames, Opts) -> any()` ### multischeduler/0 * ### `multischeduler() -> any()` ### multischeduler_test_disabled/0 * ### `multischeduler_test_disabled() -> any()` Verify that sub-ledgers can request and enforce multiple scheduler commitments. `hyper-token` always validates that peer `base` processes (the uncommitted process ID without its `scheduler` and `authority` fields) match. It allows us to specify additional constraints on the `scheduler` and `authority` fields while matching against the local ledger's base process message. This test validates the correctness of these constraints. The grammar supported by `hyper-token.lua` allows for the following, where `X = scheduler | authority`: - `X`: A list of `X`s that must (by default) be present in the peer ledger's `X` field. - `X-match`: A count of the number of `X`s that must be present in the peer ledger's `X` field. - `X-required`: A list of `X`s that always must be present in the peer ledger's `X` field. ### normalize_env/1 * ### `normalize_env(Procs) -> any()` Normalize a set of processes, representing ledgers in a test environment, to a canonical form: A map of `ID => Proc`. ### normalize_without_root/2 * ### `normalize_without_root(RootProc, Procs) -> any()` Return the normalized environment without the root ledger. ### register/3 * ### `register(ProcMsg, Peer, Opts) -> any()` Request that a peer register with a without sub-ledger. ### subledger/2 * ### `subledger(Root, Opts) -> any()` Generate a test sub-ledger process definition message. ### subledger/3 * ### `subledger(Root, Extra, Opts) -> any()` ### subledger_deposit/0 * ### `subledger_deposit() -> any()` ### subledger_deposit_test_/0 * ### `subledger_deposit_test_() -> any()` Verify that a user can deposit tokens into a sub-ledger. 
### subledger_registration_test_disabled/0 * ### `subledger_registration_test_disabled() -> any()` Verify that peer ledgers on the same token are able to register mutually to establish a peer-to-peer connection. Disabled as explicit peer registration is not required for `hyper-token.lua` to function. ### subledger_supply/3 * ### `subledger_supply(RootProc, AllProcs, Opts) -> any()` Calculate the supply of tokens in all sub-ledgers, from the balances of the root ledger. ### subledger_to_subledger/0 * ### `subledger_to_subledger() -> any()` ### subledger_to_subledger_test_/0 * ### `subledger_to_subledger_test_() -> any()` Verify that registered sub-ledgers are able to send tokens to each other without the need for messages on the root ledger. ### subledger_transfer/0 * ### `subledger_transfer() -> any()` ### subledger_transfer_test_/0 * ### `subledger_transfer_test_() -> any()` Simulate inter-ledger payments between users on a single sub-ledger: 1. Alice has tokens on the root ledger. 2. Alice sends tokens to the sub-ledger from the root ledger. 3. Alice sends tokens to Bob on the sub-ledger. 4. Bob sends tokens to Alice on the root ledger. ### supply/2 * ### `supply(ProcMsg, Opts) -> any()` Get the supply of a ledger, either `now` or `initial`. ### supply/3 * ### `supply(Mode, ProcMsg, Opts) -> any()` ### test_opts/0 * ### `test_opts() -> any()` Create a node message for the test that avoids looking up unknown recipients via remote stores. This improves test performance. ### transfer/0 * ### `transfer() -> any()` ### transfer/5 * ### `transfer(ProcMsg, Sender, Recipient, Quantity, Opts) -> any()` Generate a test transfer message. ### transfer/6 * ### `transfer(ProcMsg, Sender, Recipient, Quantity, Route, Opts) -> any()` ### transfer_test_/0 * ### `transfer_test_() -> any()` Test the `transfer` function. 1. Alice has 100 tokens on a root ledger. 2. Alice sends 1 token to Bob. 3. Alice has 99 tokens, and Bob has 1 token. ### transfer_unauthorized/0 * ### `transfer_unauthorized() -> any()` ### transfer_unauthorized_test_/0 * ### `transfer_unauthorized_test_() -> any()` User's must not be able to send tokens they do not own. We test three cases: 1. Transferring a token when the sender has no tokens. 2. Transferring a token when the sender has less tokens than the amount being transferred. 3. Transferring a binary-encoded amount of tokens that exceed the quantity of tokens the sender has available. ### unregistered_peer_transfer/0 * ### `unregistered_peer_transfer() -> any()` ### unregistered_peer_transfer_test_/0 * ### `unregistered_peer_transfer_test_() -> any()` Verify that a ledger can send tokens to a peer ledger that is not registered with it yet. Each peer ledger must have precisely the same process base message, granting transitive security properties: If a peer trusts its own compute and assignment mechanism, then it can trust messages from exact duplicates of itself. In order for this to be safe, the peer ledger network's base process message must implement sufficicient rollback protections and compute correctness guarantees. ### user_supply/3 * ### `user_supply(Proc, AllProcs, Opts) -> any()` Calculate the supply of tokens held by users on a ledger, excluding those held in sub-ledgers. ### verify_net/3 * ### `verify_net(RootProc, AllProcs, Opts) -> any()` Execute all invariant checks for a pair of root ledger and sub-ledgers. 
### verify_net_peer_balances/2 * ### `verify_net_peer_balances(AllProcs, Opts) -> any()` Verify the consistency of all expected ledger balances with their peer ledgers and the actual balances held. ### verify_net_supply/3 * ### `verify_net_supply(RootProc, AllProcs, Opts) -> any()` Verify that the sum of all spendable balances held by ledgers in a test network is equal to the initial supply of tokens. ### verify_peer_balances/3 * ### `verify_peer_balances(ValidateProc, AllProcs, Opts) -> any()` Verify that a ledger's expectation of its balances with peer ledgers is consistent with the actual balances held. ### verify_root_supply/2 * ### `verify_root_supply(RootProc, Opts) -> any()` Verify that the initial supply of tokens on the root ledger is the same as the current supply. This invariant will not hold for sub-ledgers, as they 'mint' tokens in their local supply when they receive them from other ledgers. --- END OF FILE: docs/resources/source-code/dev_lua_test_ledgers.md --- --- START OF FILE: docs/resources/source-code/dev_lua_test.md --- # [Module dev_lua_test.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_lua_test.erl) ## Function Index ##
exec_test/2*Generate an EUnit test for a given function.
exec_test_/0*Main entrypoint for Lua tests.
new_state/1*Create a new Lua environment for a given script.
parse_spec/1Parse a string representation of test descriptions received from the command line via the LUA_TESTS environment variable.
suite/2*Generate an EUnit test suite for a given Lua script.
terminates_with/2*Check if a string terminates with a given suffix.
## Function Details ## ### exec_test/2 * ### `exec_test(State, Function) -> any()` Generate an EUnit test for a given function. ### exec_test_/0 * ### `exec_test_() -> any()` Main entrypoint for Lua tests. ### new_state/1 * ### `new_state(File) -> any()` Create a new Lua environment for a given script. ### parse_spec/1 ### `parse_spec(Str) -> any()` Parse a string representation of test descriptions received from the command line via the `LUA_TESTS` environment variable. Supported syntax in loose BNF/RegEx: Definitions := (ModDef,)+ ModDef := ModName(TestDefs)? ModName := ModuleInLUA_SCRIPTS|(FileName[.lua])? TestDefs := (:TestDef)+ TestDef := TestName File names ending in `.lua` are assumed to be relative paths from the current working directory. Module names lacking the `.lua` extension are assumed to be modules found in the `LUA_SCRIPTS` environment variable (defaulting to `scripts/`). For example, to run a single test one could call the following: LUA_TESTS=~/src/LuaScripts/test.yourTest rebar3 lua-tests To specify that one would like to run all of the tests in the `scripts/test.lua` file and two tests from the `scripts/test2.lua` file, the user could provide the following test definition: LUA_TESTS="test,scripts/test2.userTest1|userTest2" rebar3 lua-tests ### suite/2 * ### `suite(File, Funcs) -> any()` Generate an EUnit test suite for a given Lua script. If the `Funcs` is the atom `tests` we find all of the global functions in the script, then filter for those ending in `_test` in a similar fashion to Eunit. ### terminates_with/2 * ### `terminates_with(String, Suffix) -> any()` Check if a string terminates with a given suffix. --- END OF FILE: docs/resources/source-code/dev_lua_test.md --- --- START OF FILE: docs/resources/source-code/dev_lua.md --- # [Module dev_lua.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_lua.erl) A device that calls a Lua module upon a request and returns the result. ## Function Index ##
ao_core_resolution_from_lua_test/0*Run an AO-Core resolution from the Lua environment.
ao_core_sandbox_test/0*Run an AO-Core resolution from the Lua environment.
aos_authority_not_trusted_test/0*
aos_process_benchmark_test_/0*Benchmark the performance of Lua executions.
compute/4*Call the Lua script with the given arguments.
decode/2Decode a Lua result into a HyperBEAM structured@1.0 message.
decode_params/3*Decode a list of Lua references, as found in a stack trace, into a list of Erlang terms.
decode_stacktrace/3*Parse a Lua stack trace into a list of messages.
decode_stacktrace/4*
direct_benchmark_test/0*Benchmark the performance of Lua executions.
encode/2Encode a HyperBEAM structured@1.0 message into a Lua term.
ensure_initialized/3*Initialize the Lua VM if it is not already initialized.
error_response_test/0*
find_modules/2*Find the script in the base message, either by ID or by string.
functions/3Return a list of all functions in the Lua environment.
generate_lua_process/2*Generate a Lua process message.
generate_stack/1*Generate a stack message for the Lua process.
generate_test_message/2*Generate a test message for a Lua process.
info/1All keys that are not directly available in the base message are resolved by calling the Lua function in the module of the same name.
init/3Initialize the device state, loading the script into memory if it is a reference.
initialize/3*Initialize a new Lua state with a given base message and module.
invoke_aos_test/0*
invoke_non_compute_key_test/0*Call a non-compute key on a Lua device message and ensure that the function of the same name in the script is called.
load_modules/2*Load a list of modules for installation into the Lua VM.
load_modules/3*
load_modules_by_id/0*
load_modules_by_id_test_/0*
lua_http_hook_test/0*Use a Lua module as a hook on the HTTP server via ~meta@1.0.
multiple_modules_test/0*
normalize/3Restore the Lua state from a snapshot, if it exists.
process_response/3*Process a response to a Luerl invocation.
pure_lua_process_benchmark/0*
pure_lua_process_benchmark_test_/0*
pure_lua_process_test/0*Call a process whose execution-device is set to lua@5.3a.
sandbox/3*Sandbox (render inoperable) a set of Lua functions.
sandboxed_failure_test/0*
simple_invocation_test/0*
snapshot/3Snapshot the Lua state from a live computation.
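To ground the functions above, here is a hedged sketch of a base message for a process whose execution is delegated to the Lua device. The `execution-device` value `lua@5.3a` comes from the tests described above; the `script` key name, the `process@1.0` device, and the overall shape are assumptions for illustration.

```erlang
%% Hedged sketch: a process definition executed by the Lua device. `LuaSource`
%% holds the script body (or a reference that `find_modules/2` can resolve).
LuaSource = <<"function compute(base, req) return 'ok', base end">>,
ProcessBase = #{
    <<"device">> => <<"process@1.0">>,
    <<"execution-device">> => <<"lua@5.3a">>,
    <<"script">> => LuaSource
}.
```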
## Function Details ## ### ao_core_resolution_from_lua_test/0 * ### `ao_core_resolution_from_lua_test() -> any()` Run an AO-Core resolution from the Lua environment. ### ao_core_sandbox_test/0 * ### `ao_core_sandbox_test() -> any()` Run an AO-Core resolution from the Lua environment. ### aos_authority_not_trusted_test/0 * ### `aos_authority_not_trusted_test() -> any()` ### aos_process_benchmark_test_/0 * ### `aos_process_benchmark_test_() -> any()` Benchmark the performance of Lua executions. ### compute/4 * ### `compute(Key, RawBase, Req, Opts) -> any()` Call the Lua script with the given arguments. ### decode/2 ### `decode(EncMsg, Opts) -> any()` Decode a Lua result into a HyperBEAM `structured@1.0` message. ### decode_params/3 * ### `decode_params(Rest, State, Opts) -> any()` Decode a list of Lua references, as found in a stack trace, into a list of Erlang terms. ### decode_stacktrace/3 * ### `decode_stacktrace(StackTrace, State0, Opts) -> any()` Parse a Lua stack trace into a list of messages. ### decode_stacktrace/4 * ### `decode_stacktrace(Rest, State, Acc, Opts) -> any()` ### direct_benchmark_test/0 * ### `direct_benchmark_test() -> any()` Benchmark the performance of Lua executions. ### encode/2 ### `encode(Map, Opts) -> any()` Encode a HyperBEAM `structured@1.0` message into a Lua term. ### ensure_initialized/3 * ### `ensure_initialized(Base, Req, Opts) -> any()` Initialize the Lua VM if it is not already initialized. Optionally takes the script as a Binary string. If not provided, the module will be loaded from the base message. ### error_response_test/0 * ### `error_response_test() -> any()` ### find_modules/2 * ### `find_modules(Base, Opts) -> any()` Find the script in the base message, either by ID or by string. ### functions/3 ### `functions(Base, Req, Opts) -> any()` Return a list of all functions in the Lua environment. ### generate_lua_process/2 * ### `generate_lua_process(File, Opts) -> any()` Generate a Lua process message. ### generate_stack/1 * ### `generate_stack(File) -> any()` Generate a stack message for the Lua process. ### generate_test_message/2 * ### `generate_test_message(Process, Opts) -> any()` Generate a test message for a Lua process. ### info/1 ### `info(Base) -> any()` All keys that are not directly available in the base message are resolved by calling the Lua function in the module of the same name. Additionally, we exclude the `keys`, `set`, `encode` and `decode` functions which are `message@1.0` core functions, and Lua public utility functions. ### init/3 ### `init(Base, Req, Opts) -> any()` Initialize the device state, loading the script into memory if it is a reference. ### initialize/3 * ### `initialize(Base, Modules, Opts) -> any()` Initialize a new Lua state with a given base message and module. ### invoke_aos_test/0 * ### `invoke_aos_test() -> any()` ### invoke_non_compute_key_test/0 * ### `invoke_non_compute_key_test() -> any()` Call a non-compute key on a Lua device message and ensure that the function of the same name in the script is called. ### load_modules/2 * ### `load_modules(Modules, Opts) -> any()` Load a list of modules for installation into the Lua VM. ### load_modules/3 * ### `load_modules(Rest, Opts, Acc) -> any()` ### load_modules_by_id/0 * ### `load_modules_by_id() -> any()` ### load_modules_by_id_test_/0 * ### `load_modules_by_id_test_() -> any()` ### lua_http_hook_test/0 * ### `lua_http_hook_test() -> any()` Use a Lua module as a hook on the HTTP server via `~meta@1.0`. 
### multiple_modules_test/0 * ### `multiple_modules_test() -> any()` ### normalize/3 ### `normalize(Base, Req, RawOpts) -> any()` Restore the Lua state from a snapshot, if it exists. ### process_response/3 * ### `process_response(X1, Priv, Opts) -> any()` Process a response to a Luerl invocation. Returns the typical AO-Core HyperBEAM response format. ### pure_lua_process_benchmark/0 * ### `pure_lua_process_benchmark() -> any()` ### pure_lua_process_benchmark_test_/0 * ### `pure_lua_process_benchmark_test_() -> any()` ### pure_lua_process_test/0 * ### `pure_lua_process_test() -> any()` Call a process whose `execution-device` is set to `lua@5.3a`. ### sandbox/3 * ### `sandbox(State, Map, Opts) -> any()` Sandbox (render inoperable) a set of Lua functions. Each function is referred to as if it is a path in AO-Core, with its value being what to return to the caller. For example, 'os.exit' would be referred to as referred to as `os/exit`. If preferred, a list rather than a map may be provided, in which case the functions all return `sandboxed`. ### sandboxed_failure_test/0 * ### `sandboxed_failure_test() -> any()` ### simple_invocation_test/0 * ### `simple_invocation_test() -> any()` ### snapshot/3 ### `snapshot(Base, Req, Opts) -> any()` Snapshot the Lua state from a live computation. Normalizes its `priv` state element, then serializes the state to a binary. --- END OF FILE: docs/resources/source-code/dev_lua.md --- --- START OF FILE: docs/resources/source-code/dev_manifest.md --- # [Module dev_manifest.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_manifest.erl) An Arweave path manifest resolution device. ## Description ## Follows the v1 schema: https://specs.ar.io/?tx=lXLd0OPwo-dJLB_Amz5jgIeDhiOkjXuM3-r0H_aiNj0 ## Function Index ##
info/0Use the route/4 function as the handler for all requests, aside from keys and set, which are handled by the default resolver.
manifest/3*Find and deserialize a manifest from the given base.
resolve_test/0*
route/4*Route a request to the associated data via its manifest.
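For reference, the sketch below shows the general shape of a v1 path manifest, written as an Erlang map roughly as it might appear after deserialization. The field names follow the public Arweave path-manifest specification linked above; the map itself is illustrative and is not output produced by `manifest/3`.

```erlang
%% Hedged sketch of a deserialized v1 path manifest. `route/4` resolves a
%% requested path against the `paths` table to find the data transaction ID.
Manifest = #{
    <<"manifest">> => <<"arweave/paths">>,
    <<"version">> => <<"0.1.0">>,
    <<"index">> => #{ <<"path">> => <<"index.html">> },
    <<"paths">> => #{
        <<"index.html">> => #{ <<"id">> => <<"...transaction ID...">> }
    }
}.
```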
## Function Details ## ### info/0 ### `info() -> any()` Use the `route/4` function as the handler for all requests, aside from `keys` and `set`, which are handled by the default resolver. ### manifest/3 * ### `manifest(Base, Req, Opts) -> any()` Find and deserialize a manifest from the given base. ### resolve_test/0 * ### `resolve_test() -> any()` ### route/4 * ### `route(Key, M1, M2, Opts) -> any()` Route a request to the associated data via its manifest. --- END OF FILE: docs/resources/source-code/dev_manifest.md --- --- START OF FILE: docs/resources/source-code/dev_message.md --- # [Module dev_message.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_message.erl) The identity device: For non-reserved keys, it simply returns a key from the message as it is found in the message's underlying Erlang map. ## Description ## Private keys (`priv[.*]`) are not included. Reserved keys are: `id`, `commitments`, `committers`, `keys`, `path`, `set`, `remove`, `get`, and `verify`. Their function comments describe the behaviour of the device when these keys are set. ## Function Index ##
calculate_id/3*
cannot_get_private_keys_test/0*
case_insensitive_get/3*Key matching should be case insensitive, following RFC-9110, so we implement a case-insensitive key lookup rather than delegating to hb_maps:get/2.
case_insensitive_get_test/0*
commit/3Commit to a message, using the commitment-device key to specify the device that should be used to commit to the message.
commitment_ids_from_committers/3*Returns a list of commitment IDs in a commitments map that are relevant for a list of given committer addresses.
commitment_ids_from_request/3*Implements a standardized form of specifying commitment IDs for a message request.
committed/3Return the list of committed keys from a message.
committers/1Return the committers of a message that are present in the given request.
committers/2
committers/3
deep_unset_test/0*
ensure_commitments_loaded/2*Ensure that the commitments submessage of a base message is fully loaded into local memory.
get/3Return the value associated with the key as it exists in the message's underlying Erlang map.
get/4
get_keys_mod_test/0*
id/1Return the ID of a message, using the committers list if it exists.
id/2
id/3
id_device/2*Locate the ID device of a message.
index/3Generate an index page for a message, in the event that the body and content-type of a message returned to the client are both empty.
info/0Return the info for the identity device.
is_private_mod_test/0*
key_from_device_test/0*
keys/1Get the public keys of a message.
keys/2
keys_from_device_test/0*
private_keys_are_filtered_test/0*
remove/2Remove a key or keys from a message.
remove/3
remove_test/0*
set/3Deep merge keys in a message.
set_conflicting_keys_test/0*
set_ignore_undefined_test/0*
set_path/3Special case of set/3 for setting the path key.
unset_with_set_test/0*
verify/3Verify a message.
verify_commitment/3*Execute a function for a single commitment in the context of its parent message.
verify_test/0*
with_relevant_commitments/3*Return a message with only the relevant commitments for a given request.
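The commitment-selection convention used by `verify/3` and `commitment_ids_from_request/3` (see the index above) can be sketched as request messages. The key names are taken from the descriptions in this file; treat the overall shape as an illustrative assumption rather than a definitive API reference.

```erlang
%% Hedged sketch: verify only the commitments made by one committer address,
%% rather than the default `all`.
VerifyByCommitter = #{
    <<"path">> => <<"verify">>,
    <<"committers">> => [<<"...committer address...">>]
},
%% Alternatively, name the relevant commitment IDs directly.
VerifyByID = #{
    <<"path">> => <<"verify">>,
    <<"commitments">> => [<<"...commitment ID...">>]
}.
```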
## Function Details ## ### calculate_id/3 * ### `calculate_id(Base, Req, NodeOpts) -> any()` ### cannot_get_private_keys_test/0 * ### `cannot_get_private_keys_test() -> any()` ### case_insensitive_get/3 * ### `case_insensitive_get(Key, Msg, Opts) -> any()` Key matching should be case insensitive, following RFC-9110, so we implement a case-insensitive key lookup rather than delegating to `hb_maps:get/2`. Encode the key to a binary if it is not already. ### case_insensitive_get_test/0 * ### `case_insensitive_get_test() -> any()` ### commit/3 ### `commit(Self, Req, Opts) -> any()` Commit to a message, using the `commitment-device` key to specify the device that should be used to commit to the message. If the key is not set, the default device (`httpsig@1.0`) is used. ### commitment_ids_from_committers/3 * ### `commitment_ids_from_committers(CommitterAddrs, Commitments, Opts) -> any()` Returns a list of commitment IDs in a commitments map that are relevant for a list of given committer addresses. ### commitment_ids_from_request/3 * ### `commitment_ids_from_request(Base, Req, Opts) -> any()` Implements a standardized form of specifying commitment IDs for a message request. The caller may specify a list of committers (by address) or a list of commitment IDs directly. They may specify both, in which case the returned list will be the union of the two lists. In each case, they may specify `all` or `none` for each group. If no specifiers are provided, the default is `all` for commitments -- also implying `all` for committers. ### committed/3 ### `committed(Self, Req, Opts) -> any()` Return the list of committed keys from a message. ### committers/1 ### `committers(Base) -> any()` Return the committers of a message that are present in the given request. ### committers/2 ### `committers(Base, Req) -> any()` ### committers/3 ### `committers(X1, X2, NodeOpts) -> any()` ### deep_unset_test/0 * ### `deep_unset_test() -> any()` ### ensure_commitments_loaded/2 * ### `ensure_commitments_loaded(NonRelevant, Opts) -> any()` Ensure that the `commitments` submessage of a base message is fully loaded into local memory. ### get/3 ### `get(Key, Msg, Opts) -> any()` Return the value associated with the key as it exists in the message's underlying Erlang map. First check the public keys, then check case- insensitively if the key is a binary. ### get/4 ### `get(Key, Msg, Msg2, Opts) -> any()` ### get_keys_mod_test/0 * ### `get_keys_mod_test() -> any()` ### id/1 ### `id(Base) -> any()` Return the ID of a message, using the `committers` list if it exists. If the `committers` key is `all`, return the ID including all known commitments -- `none` yields the ID without any commitments. If the `committers` key is a list/map, return the ID including only the specified commitments. The `id-device` key in the message can be used to specify the device that should be used to calculate the ID. If it is not set, the default device (`httpsig@1.0`) is used. Note: This function _does not_ use AO-Core's `get/3` function, as it as it would require significant computation. We may want to change this if/when non-map message structures are created. ### id/2 ### `id(Base, Req) -> any()` ### id/3 ### `id(Base, Req, NodeOpts) -> any()` ### id_device/2 * ### `id_device(X1, Opts) -> any()` Locate the ID device of a message. The ID device is determined the `device` set in _all_ of the commitments. If no commitments are present, the default device (`httpsig@1.0`) is used. 
### index/3 ### `index(Msg, Req, Opts) -> any()` Generate an index page for a message, in the event that the `body` and `content-type` of a message returned to the client are both empty. We do this as follows: 1. Find the `default_index` key of the node message. If it is a binary, it is assumed to be the name of a device, and we execute the resolution `as` that ID. 2. Merge the base message with the default index message, favoring the default index message's keys over those in the base message, unless the default was a device name. 3. Execute the `default_index_path` (base: `index`) upon the message, giving the rest of the request unchanged. ### info/0 ### `info() -> any()` Return the info for the identity device. ### is_private_mod_test/0 * ### `is_private_mod_test() -> any()` ### key_from_device_test/0 * ### `key_from_device_test() -> any()` ### keys/1 ### `keys(Msg) -> any()` Get the public keys of a message. ### keys/2 ### `keys(Msg, Opts) -> any()` ### keys_from_device_test/0 * ### `keys_from_device_test() -> any()` ### private_keys_are_filtered_test/0 * ### `private_keys_are_filtered_test() -> any()` ### remove/2 ### `remove(Message1, Key) -> any()` Remove a key or keys from a message. ### remove/3 ### `remove(Message1, X2, Opts) -> any()` ### remove_test/0 * ### `remove_test() -> any()` ### set/3 ### `set(Message1, NewValuesMsg, Opts) -> any()` Deep merge keys in a message. Takes a map of key-value pairs and sets them in the message, overwriting any existing values. ### set_conflicting_keys_test/0 * ### `set_conflicting_keys_test() -> any()` ### set_ignore_undefined_test/0 * ### `set_ignore_undefined_test() -> any()` ### set_path/3 ### `set_path(Message1, X2, Opts) -> any()` Special case of `set/3` for setting the `path` key. This cannot be set using the normal `set` function, as the `path` is a reserved key, necessary for AO-Core to know the key to evaluate in requests. ### unset_with_set_test/0 * ### `unset_with_set_test() -> any()` ### verify/3 ### `verify(Self, Req, Opts) -> any()` Verify a message. By default, all commitments are verified. The `committers` key in the request can be used to specify that only the commitments from specific committers should be verified. Similarly, specific commitments can be specified using the `commitments` key. ### verify_commitment/3 * ### `verify_commitment(Base, Commitment, Opts) -> any()` Execute a function for a single commitment in the context of its parent message. Note: Assumes that the `commitments` key has already been removed from the message if applicable. ### verify_test/0 * ### `verify_test() -> any()` ### with_relevant_commitments/3 * ### `with_relevant_commitments(Base, Req, Opts) -> any()` Return a message with only the relevant commitments for a given request. See `commitment_ids_from_request/3` for more information on the request format. --- END OF FILE: docs/resources/source-code/dev_message.md --- --- START OF FILE: docs/resources/source-code/dev_meta.md --- # [Module dev_meta.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_meta.erl) The hyperbeam meta device, which is the default entry point for all messages processed by the machine. ## Description ## This device executes a AO-Core singleton request, after first applying the node's pre-processor, if set. The pre-processor can halt the request by returning an error, or return a modified version if it deems necessary -- the result of the pre-processor is used as the request for the AO-Core resolver. 
Additionally, a post-processor can be set, which is executed after the AO-Core resolver has returned a result.

## Function Index ##
add_dynamic_keys/1*Add dynamic keys to the node message.
add_identity_addresses/1*
adopt_node_message/2Attempt to adopt changes to a node message.
authorized_set_node_msg_succeeds_test/0*Test that we can set the node message if the request is signed by the owner of the node.
build/3Emits the version number and commit hash of the HyperBEAM node source, if available.
buildinfo_test/0*Test that version information is available and returned correctly.
claim_node_test/0*Test that we can claim the node correctly and set the node message after.
config_test/0*Test that we can get the node message.
embed_status/2*Wrap the result of a device call in a status.
filter_node_msg/2*Remove items from the node message that are not encodable into a message.
halt_request_test/0*Test that we can halt a request if the hook returns an error.
handle/2Normalize and route messages downstream based on their path.
handle_initialize/2*
handle_resolve/3*Handle an AO-Core request, which is a list of messages.
info/1Ensure that the helper function adopt_node_message/2 is not exported.
info/3Get/set the node message.
is/2Check if the request in question is signed by a given role on the node.
is/3
is_operator/2Utility function for determining if a request is from the operator of the node.
maybe_sign/2*Sign the result of a device call if the node is configured to do so.
message_to_status/2*Get the HTTP status code from a transaction (if it exists).
modify_request_test/0*Test that a hook can modify a request.
permanent_node_message_test/0*Test that a permanent node message cannot be changed.
priv_inaccessible_test/0*Test that we can't get the node message if the requested key is private.
request_response_hooks_test/0*
resolve_hook/4*Execute a hook from the node message upon the user's request.
status_code/2*Calculate the appropriate HTTP status code for an AO-Core result.
unauthorized_set_node_msg_fails_test/0*Test that we can't set the node message if the request is not signed by the owner of the node.
uninitialized_node_test/0*Test that an uninitialized node will not run computation.
update_node_message/2*Validate that the request is signed by the operator of the node, then allow them to update the node message.
## Function Details ## ### add_dynamic_keys/1 * ### `add_dynamic_keys(NodeMsg) -> any()` Add dynamic keys to the node message. ### add_identity_addresses/1 * ### `add_identity_addresses(NodeMsg) -> any()` ### adopt_node_message/2 ### `adopt_node_message(Request, NodeMsg) -> any()` Attempt to adopt changes to a node message. ### authorized_set_node_msg_succeeds_test/0 * ### `authorized_set_node_msg_succeeds_test() -> any()` Test that we can set the node message if the request is signed by the owner of the node. ### build/3 ### `build(X1, X2, NodeMsg) -> any()` Emits the version number and commit hash of the HyperBEAM node source, if available. We include the short hash separately, as the length of this hash may change in the future, depending on the git version/config used to build the node. Subsequently, rather than embedding the `git-short-hash-length`, for the avoidance of doubt, we include the short hash separately, as well as its long hash. ### buildinfo_test/0 * ### `buildinfo_test() -> any()` Test that version information is available and returned correctly. ### claim_node_test/0 * ### `claim_node_test() -> any()` Test that we can claim the node correctly and set the node message after. ### config_test/0 * ### `config_test() -> any()` Test that we can get the node message. ### embed_status/2 * ### `embed_status(X1, NodeMsg) -> any()` Wrap the result of a device call in a status. ### filter_node_msg/2 * ### `filter_node_msg(Msg, NodeMsg) -> any()` Remove items from the node message that are not encodable into a message. ### halt_request_test/0 * ### `halt_request_test() -> any()` Test that we can halt a request if the hook returns an error. ### handle/2 ### `handle(NodeMsg, RawRequest) -> any()` Normalize and route messages downstream based on their path. Messages with a `Meta` key are routed to the `handle_meta/2` function, while all other messages are routed to the `handle_resolve/2` function. ### handle_initialize/2 * ### `handle_initialize(Rest, NodeMsg) -> any()` ### handle_resolve/3 * ### `handle_resolve(Req, Msgs, NodeMsg) -> any()` Handle an AO-Core request, which is a list of messages. We apply the node's pre-processor to the request first, and then resolve the request using the node's AO-Core implementation if its response was `ok`. After execution, we run the node's `response` hook on the result of the request before returning the result it grants back to the user. ### info/1 ### `info(X1) -> any()` Ensure that the helper function `adopt_node_message/2` is not exported. The naming of this method carefully avoids a clash with the exported `info/3` function. We would like the node information to be easily accessible via the `info` endpoint, but AO-Core also uses `info` as the name of the function that grants device information. The device call takes two or fewer arguments, so we are safe to use the name for both purposes in this case, as the user info call will match the three-argument version of the function. If in the future the `request` is added as an argument to AO-Core's internal `info` function, we will need to find a different approach. ### info/3 ### `info(X1, Request, NodeMsg) -> any()` Get/set the node message. If the request is a `POST`, we check that the request is signed by the owner of the node. If not, we return the node message as-is, aside all keys that are private (according to `hb_private`). ### is/2 ### `is(Request, NodeMsg) -> any()` Check if the request in question is signed by a given `role` on the node. 
The `role` can be one of `operator` or `initiator`. ### is/3 ### `is(X1, Request, NodeMsg) -> any()` ### is_operator/2 ### `is_operator(Request, NodeMsg) -> any()` Utility function for determining if a request is from the `operator` of the node. ### maybe_sign/2 * ### `maybe_sign(Res, NodeMsg) -> any()` Sign the result of a device call if the node is configured to do so. ### message_to_status/2 * ### `message_to_status(Item, NodeMsg) -> any()` Get the HTTP status code from a transaction (if it exists). ### modify_request_test/0 * ### `modify_request_test() -> any()` Test that a hook can modify a request. ### permanent_node_message_test/0 * ### `permanent_node_message_test() -> any()` Test that a permanent node message cannot be changed. ### priv_inaccessible_test/0 * ### `priv_inaccessible_test() -> any()` Test that we can't get the node message if the requested key is private. ### request_response_hooks_test/0 * ### `request_response_hooks_test() -> any()` ### resolve_hook/4 * ### `resolve_hook(HookName, InitiatingRequest, Body, NodeMsg) -> any()` Execute a hook from the node message upon the user's request. The invocation of the hook provides a request of the following form: ``` /path => request | response /request => the original request singleton /body => parsed sequence of messages to process | the execution result ``` ### status_code/2 * ### `status_code(X1, NodeMsg) -> any()` Calculate the appropriate HTTP status code for an AO-Core result. The order of precedence is: 1. The status code from the message. 2. The HTTP representation of the status code. 3. The default status code. ### unauthorized_set_node_msg_fails_test/0 * ### `unauthorized_set_node_msg_fails_test() -> any()` Test that we can't set the node message if the request is not signed by the owner of the node. ### uninitialized_node_test/0 * ### `uninitialized_node_test() -> any()` Test that an uninitialized node will not run computation. ### update_node_message/2 * ### `update_node_message(Request, NodeMsg) -> any()` Validate that the request is signed by the operator of the node, then allow them to update the node message. --- END OF FILE: docs/resources/source-code/dev_meta.md --- --- START OF FILE: docs/resources/source-code/dev_monitor.md --- # [Module dev_monitor.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_monitor.erl) ## Function Index ##
add_monitor/2
end_of_schedule/1
execute/2
init/3
signal/2*
uses/0
## Function Details ## ### add_monitor/2 ### `add_monitor(Mon, State) -> any()` ### end_of_schedule/1 ### `end_of_schedule(State) -> any()` ### execute/2 ### `execute(Message, State) -> any()` ### init/3 ### `init(State, X2, InitState) -> any()` ### signal/2 * ### `signal(State, Signal) -> any()` ### uses/0 ### `uses() -> any()` --- END OF FILE: docs/resources/source-code/dev_monitor.md --- --- START OF FILE: docs/resources/source-code/dev_multipass.md --- # [Module dev_multipass.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_multipass.erl) A device that triggers repass events until a certain counter has been reached. ## Description ## This is useful for certain types of stacks that need various execution passes to be completed in sequence across devices. ## Function Index ##
basic_multipass_test/0*
handle/4*Forward the keys function to the message device, handle all others with deduplication.
info/1
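The repass behaviour described above can be pictured as a loop that re-runs an execution step until a target pass count is reached. The sketch below is illustrative only and does not use the device's actual message interface.

```erlang
%% Sketch: re-run ExecFun once per pass until TargetPasses is reached.
-module(multipass_sketch).
-export([run_passes/3]).

run_passes(ExecFun, State, TargetPasses) ->
    run_passes(ExecFun, State, 1, TargetPasses).

run_passes(ExecFun, State, Pass, TargetPasses) when Pass =< TargetPasses ->
    {ok, NewState} = ExecFun(Pass, State),
    run_passes(ExecFun, NewState, Pass + 1, TargetPasses);
run_passes(_ExecFun, State, _Pass, _TargetPasses) ->
    {ok, State}.
```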
## Function Details ## ### basic_multipass_test/0 * ### `basic_multipass_test() -> any()` ### handle/4 * ### `handle(Key, M1, M2, Opts) -> any()` Forward the keys function to the message device, handle all others with deduplication. We only act on the first pass. ### info/1 ### `info(M1) -> any()` --- END OF FILE: docs/resources/source-code/dev_multipass.md --- --- START OF FILE: docs/resources/source-code/dev_name.md --- # [Module dev_name.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_name.erl) A device for resolving names to their corresponding values, through the use of a `resolver` interface. ## Description ## Each `resolver` is a message that can be given a `key` and returns an associated value. The device will attempt to match the key against each resolver in turn, and return the value of the first resolver that matches. ## Function Index ##
execute_resolver/3*Execute a resolver with the given key and return its value.
info/1Configure the default key to proxy to the resolver/4 function.
load_and_execute_test/0*Test that we can resolve messages from a name loaded with the device.
match_resolver/3*Find the first resolver that matches the key and return its value.
message_lookup_device_resolver/1*
multiple_resolvers_test/0*
no_resolvers_test/0*
resolve/4*Resolve a name to its corresponding value.
single_resolver_test/0*
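The resolver interface described above amounts to trying each resolver in order and returning the first match. A minimal sketch of that pattern, using plain funs in place of resolver messages:

```erlang
%% Sketch: return the value of the first resolver that matches Key.
-module(name_resolver_sketch).
-export([resolve_name/2]).

resolve_name(_Key, []) ->
    {error, not_found};
resolve_name(Key, [Resolver | Rest]) ->
    case Resolver(Key) of
        not_found -> resolve_name(Key, Rest);
        Value -> {ok, Value}
    end.
```

For example, `resolve_name(<<"hello">>, [fun(<<"hello">>) -> <<"world">>; (_) -> not_found end])` returns `{ok, <<"world">>}`.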
## Function Details ##

### execute_resolver/3 * ###

`execute_resolver(Key, Resolver, Opts) -> any()`

Execute a resolver with the given key and return its value.

### info/1 ###

`info(X1) -> any()`

Configure the `default` key to proxy to the `resolver/4` function. Exclude the `keys` and `set` keys from being processed by this device, as these are needed to modify the base message itself.

### load_and_execute_test/0 * ###

`load_and_execute_test() -> any()`

Test that we can resolve messages from a name loaded with the device.

### match_resolver/3 * ###

`match_resolver(Key, Resolvers, Opts) -> any()`

Find the first resolver that matches the key and return its value.

### message_lookup_device_resolver/1 * ###

`message_lookup_device_resolver(Msg) -> any()`

### multiple_resolvers_test/0 * ###

`multiple_resolvers_test() -> any()`

### no_resolvers_test/0 * ###

`no_resolvers_test() -> any()`

### resolve/4 * ###

`resolve(Key, X2, Req, Opts) -> any()`

Resolve a name to its corresponding value. The name is given by the key called. For example, `GET /~name@1.0/hello&load=false` grants the value of `hello`. If the `load` key is set to `true`, the value is treated as a pointer and its contents are loaded from the cache. For example, `GET /~name@1.0/reference` yields the message at the path specified by the `reference` key.

### single_resolver_test/0 * ###

`single_resolver_test() -> any()`

--- END OF FILE: docs/resources/source-code/dev_name.md ---

--- START OF FILE: docs/resources/source-code/dev_node_process.md ---

# [Module dev_node_process.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_node_process.erl)

A device that implements the singleton pattern for processes specific to an individual node.

## Description ##

This device uses the `local-name@1.0` device to register processes with names locally, persisting them across reboots. Definitions of singleton processes are expected to be found with their names in the `node_processes` section of the node message.

## Function Index ##
augment_definition/2*Augment the given process definition with the node's address.
generate_test_opts/0*Helper function to generate a test environment and its options.
generate_test_opts/1*
info/1Register a default handler for the device.
lookup/4*Lookup a process by name.
lookup_execute_test/0*Test that a process can be spawned, executed upon, and its result retrieved.
lookup_no_spawn_test/0*
lookup_spawn_test/0*
spawn_register/2*Spawn a new process according to the process definition found in the node message, and register it with the given name.
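The singleton behaviour this device provides boils down to a lookup-or-spawn step. The sketch below illustrates the pattern with the local Erlang registry; the device itself registers names via `local-name@1.0` so that they persist across reboots.

```erlang
%% Sketch: return the existing process for Name, or spawn and register one.
-module(node_singleton_sketch).
-export([lookup_or_spawn/2]).

lookup_or_spawn(Name, SpawnFun) ->
    case whereis(Name) of
        undefined ->
            Pid = SpawnFun(),
            true = register(Name, Pid),
            {spawned, Pid};
        Pid ->
            {existing, Pid}
    end.
```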
## Function Details ##

### augment_definition/2 * ###

`augment_definition(BaseDef, Opts) -> any()`

Augment the given process definition with the node's address.

### generate_test_opts/0 * ###

`generate_test_opts() -> any()`

Helper function to generate a test environment and its options.

### generate_test_opts/1 * ###

`generate_test_opts(Defs) -> any()`

### info/1 ###

`info(Opts) -> any()`

Register a default handler for the device. Inherits `keys` and `set` from the default device.

### lookup/4 * ###

`lookup(Name, Base, Req, Opts) -> any()`

Lookup a process by name.

### lookup_execute_test/0 * ###

`lookup_execute_test() -> any()`

Test that a process can be spawned, executed upon, and its result retrieved.

### lookup_no_spawn_test/0 * ###

`lookup_no_spawn_test() -> any()`

### lookup_spawn_test/0 * ###

`lookup_spawn_test() -> any()`

### spawn_register/2 * ###

`spawn_register(Name, Opts) -> any()`

Spawn a new process according to the process definition found in the node message, and register it with the given name.

--- END OF FILE: docs/resources/source-code/dev_node_process.md ---

--- START OF FILE: docs/resources/source-code/dev_p4.md ---

# [Module dev_p4.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_p4.erl)

The HyperBEAM core payment ledger.

## Description ##

This module allows the operator to specify another device that can act as a pricing mechanism for transactions on the node, as well as orchestrating a payment ledger to calculate whether the node should fulfil services for users.

The device requires the following node message settings in order to function:

- `p4_pricing-device`: The device that will estimate the cost of a request.
- `p4_ledger-device`: The device that will act as a payment ledger.

The pricing device should implement the following keys:

```
GET /estimate?type=pre|post&body=[...]&request=RequestMessage
GET /price?type=pre|post&body=[...]&request=RequestMessage
```

The `body` key is used to pass either the request or response messages to the device. The `type` key is used to specify whether the inquiry is for a request (pre) or a response (post) object. Requests carry lists of messages that will be executed, while responses carry the results of the execution.

The `price` key may return `infinity` if the node will not serve a user under any circumstances. Else, the value returned by the `price` key will be passed to the ledger device as the `amount` key.

A ledger device should implement the following keys:

```
POST /credit?message=PaymentMessage&request=RequestMessage
POST /charge?amount=PriceMessage&request=RequestMessage
GET /balance?request=RequestMessage
```

The `type` key is optional and defaults to `pre`. If `type` is set to `post`, the charge must be applied to the ledger, whereas the `pre` type is used to check whether the charge would succeed before execution.

## Function Index ##
balance/3Get the balance of a user in the ledger.
faff_test/0*Simple test of p4's capabilities with the faff@1.0 device.
hyper_token_ledger/0*
hyper_token_ledger_test_/0*Ensure that Lua scripts can be used as pricing and ledger devices.
is_chargable_req/2*The node operator may elect to make certain routes non-chargable, using the routes syntax also used to declare routes in router@1.0.
non_chargable_route_test/0*Test that a non-chargable route is not charged for.
request/3Estimate the cost of a transaction and decide whether to proceed with a request.
response/3Postprocess the request after it has been fulfilled.
test_opts/1*
test_opts/2*
test_opts/3*
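A pricing device of the kind described above ultimately returns either a price or `infinity`. The sketch below shows a trivially simple policy, flat pricing per message with one refused path; the path prefix and per-message rate are invented for illustration and are not part of the device.

```erlang
%% Sketch: a toy pricing policy. Returns a price for the given messages,
%% or the atom 'infinity' to refuse service outright.
-module(p4_pricing_sketch).
-export([estimate/2]).

estimate(_Messages, <<"/blocked", _/binary>>) ->
    infinity;                      % never serve this (hypothetical) path
estimate(Messages, _Path) ->
    length(Messages) * 10.         % flat rate of 10 units per message
```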
## Function Details ## ### balance/3 ### `balance(X1, Req, NodeMsg) -> any()` Get the balance of a user in the ledger. ### faff_test/0 * ### `faff_test() -> any()` Simple test of p4's capabilities with the `faff@1.0` device. ### hyper_token_ledger/0 * ### `hyper_token_ledger() -> any()` ### hyper_token_ledger_test_/0 * ### `hyper_token_ledger_test_() -> any()` Ensure that Lua scripts can be used as pricing and ledger devices. Our scripts come in two components: 1. A `process` script which is executed as a persistent `local-process` on the node, and which maintains the state of the ledger. This process runs `hyper-token.lua` as its base, then adds the logic of `hyper-token-p4.lua` to it. This secondary script implements the `charge` function that `p4@1.0` will call to charge a user's account. 2. A `client` script, which is executed as a `p4@1.0` ledger device, which uses `~push@1.0` to send requests to the ledger `process`. ### is_chargable_req/2 * ### `is_chargable_req(Req, NodeMsg) -> any()` The node operator may elect to make certain routes non-chargable, using the `routes` syntax also used to declare routes in `router@1.0`. ### non_chargable_route_test/0 * ### `non_chargable_route_test() -> any()` Test that a non-chargable route is not charged for. ### request/3 ### `request(State, Raw, NodeMsg) -> any()` Estimate the cost of a transaction and decide whether to proceed with a request. The default behavior if `pricing-device` or `p4_balances` are not set is to proceed, so it is important that a user initialize them. ### response/3 ### `response(State, RawResponse, NodeMsg) -> any()` Postprocess the request after it has been fulfilled. ### test_opts/1 * ### `test_opts(Opts) -> any()` ### test_opts/2 * ### `test_opts(Opts, PricingDev) -> any()` ### test_opts/3 * ### `test_opts(Opts, PricingDev, LedgerDev) -> any()` --- END OF FILE: docs/resources/source-code/dev_p4.md --- --- START OF FILE: docs/resources/source-code/dev_patch.md --- # [Module dev_patch.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_patch.erl) A device that can be used to reorganize a message: Moving data from one path inside it to another. ## Description ## This device's function runs in two modes: 1. When using `all` to move all data at the path given in `from` to the path given in `to`. 2. When using `patches` to move all submessages in the source to the target, _if_ they have a `method` key of `PATCH` or a `device` key of `patch@1.0`. Source and destination paths may be prepended by `base:` or `req:` keys to indicate that they are relative to either of the message's that the computation is being performed on. The search order for finding the source and destination keys is as follows, where `X` is either `from` or `to`: 1. The `patch-X` key of the execution message. 2. The `X` key of the execution message. 3. The `patch-X` key of the request message. 4. The `X` key of the request message. Additionally, this device implements the standard computation device keys, allowing it to be used as an element of an execution stack pipeline, etc. ## Function Index ##
all/3Get the value found at the patch-from key of the message, or the from key if the former is not present.
all_mode_test/0*
compute/3
init/3Necessary hooks for compliance with the execution-device standard.
move/4*Unified executor for the all and patches modes.
normalize/3
patch_to_submessage_test/0*
patches/3Find relevant PATCH messages in the given source key of the execution and request messages, and apply them to the given destination key of the request.
req_prefix_test/0*
snapshot/3
uninitialized_patch_test/0*
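The `all` mode described above moves whatever sits at the source key to the destination key. A minimal sketch of that operation over a plain map (ignoring the `base:`/`req:` prefixes and the `patch-*` key fallbacks):

```erlang
%% Sketch: move the value at key From to key To, removing it from From.
-module(patch_move_sketch).
-export([move_all/3]).

move_all(From, To, Msg) ->
    case maps:take(From, Msg) of
        {Value, Rest} -> Rest#{To => Value};
        error -> Msg                     % nothing at the source key
    end.
```

For example, `move_all(<<"outbox">>, <<"results">>, #{<<"outbox">> => [1, 2]})` yields `#{<<"results">> => [1, 2]}`.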
## Function Details ##

### all/3 ###

`all(Msg1, Msg2, Opts) -> any()`

Get the value found at the `patch-from` key of the message, or the `from` key if the former is not present. Remove it from the message and set the new source to the value found.

### all_mode_test/0 * ###

`all_mode_test() -> any()`

### compute/3 ###

`compute(Msg1, Msg2, Opts) -> any()`

### init/3 ###

`init(Msg1, Msg2, Opts) -> any()`

Necessary hooks for compliance with the `execution-device` standard.

### move/4 * ###

`move(Mode, Msg1, Msg2, Opts) -> any()`

Unified executor for the `all` and `patches` modes.

### normalize/3 ###

`normalize(Msg1, Msg2, Opts) -> any()`

### patch_to_submessage_test/0 * ###

`patch_to_submessage_test() -> any()`

### patches/3 ###

`patches(Msg1, Msg2, Opts) -> any()`

Find relevant `PATCH` messages in the given source key of the execution and request messages, and apply them to the given destination key of the request.

### req_prefix_test/0 * ###

`req_prefix_test() -> any()`

### snapshot/3 ###

`snapshot(Msg1, Msg2, Opts) -> any()`

### uninitialized_patch_test/0 * ###

`uninitialized_patch_test() -> any()`

--- END OF FILE: docs/resources/source-code/dev_patch.md ---

--- START OF FILE: docs/resources/source-code/dev_poda.md ---

# [Module dev_poda.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_poda.erl)

A simple exemplar decentralized proof of authority consensus algorithm for AO processes.

## Description ##

This device is split into two flows, spanning three actions.

Execution flow:

1. Initialization.
2. Validation of incoming messages before execution.

Commitment flow:

1. Adding commitments to results, either on a CU or MU.

## Function Index ##
add_commitments/3*
commit_to_results/3*
execute/3
extract_opts/1*
find_process/2*Find the process that this message is targeting, in order to determine which commitments to add.
init/2
is_user_signed/1Determines if a user committed.
pfiltermap/2*Helper function for parallel execution of commitment gathering.
push/3Hook used by the MU pathway (currently) to add commitments to an outbound message if the computation requests it.
return_error/2*
validate/2*
validate_commitment/3*
validate_stage/3*
validate_stage/4*
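At its core, the commitment flow accepts a result once enough recognized authorities have committed to it. The sketch below shows that quorum rule in isolation; the `<<"committer">>` key is a hypothetical accessor, not the module's actual message layout.

```erlang
%% Sketch: a result is accepted when commitments from recognized
%% authorities meet the configured quorum.
-module(poda_quorum_sketch).
-export([quorum_met/3]).

quorum_met(Commitments, Authorities, Quorum) ->
    Valid =
        [C || C <- Commitments,
              lists:member(maps:get(<<"committer">>, C, undefined), Authorities)],
    length(Valid) >= Quorum.
```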
## Function Details ## ### add_commitments/3 * ### `add_commitments(NewMsg, S, Opts) -> any()` ### commit_to_results/3 * ### `commit_to_results(Msg, S, Opts) -> any()` ### execute/3 ### `execute(Outer, S, Opts) -> any()` ### extract_opts/1 * ### `extract_opts(Params) -> any()` ### find_process/2 * ### `find_process(Item, X2) -> any()` Find the process that this message is targeting, in order to determine which commitments to add. ### init/2 ### `init(S, Params) -> any()` ### is_user_signed/1 ### `is_user_signed(Tx) -> any()` Determines if a user committed ### pfiltermap/2 * ### `pfiltermap(Pred, List) -> any()` Helper function for parallel execution of commitment gathering. ### push/3 ### `push(Item, S, Opts) -> any()` Hook used by the MU pathway (currently) to add commitments to an outbound message if the computation requests it. ### return_error/2 * ### `return_error(S, Reason) -> any()` ### validate/2 * ### `validate(Msg, Opts) -> any()` ### validate_commitment/3 * ### `validate_commitment(Msg, Comm, Opts) -> any()` ### validate_stage/3 * ### `validate_stage(X1, Msg, Opts) -> any()` ### validate_stage/4 * ### `validate_stage(X1, Tx, Content, Opts) -> any()` --- END OF FILE: docs/resources/source-code/dev_poda.md --- --- START OF FILE: docs/resources/source-code/dev_process_cache.md --- # [Module dev_process_cache.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_process_cache.erl) A wrapper around the hb_cache module that provides a more convenient interface for reading the result of a process at a given slot or message ID. ## Function Index ##
find_latest_outputs/1*Test for retrieving the latest computed output for a process.
first_with_path/4*Find the latest assignment with the requested path suffix.
first_with_path/5*
latest/2Retrieve the latest slot for a given process.
latest/3
latest/4
path/3*Calculate the path of a result, given a process ID and a slot.
path/4*
process_cache_suite_test_/0*
read/2Read the result of a process at a given slot.
read/3
test_write_and_read_output/1*Test for writing multiple computed outputs, then getting them by their slot number and by their signed and unsigned IDs.
write/4Write a process computation result to the cache.
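Using only the arities documented here, a round trip through the cache might look like the sketch below. It is assumed to run inside a HyperBEAM node shell with an `Opts` map that carries a valid store configuration; the return shapes are not asserted.

```erlang
%% Sketch: write a result for a slot, read it back, and query the latest
%% known slot. Opts is assumed to contain the node's store configuration.
-module(process_cache_sketch).
-export([roundtrip/4]).

roundtrip(ProcID, Slot, ResultMsg, Opts) ->
    WriteRes  = dev_process_cache:write(ProcID, Slot, ResultMsg, Opts),
    ReadRes   = dev_process_cache:read(ProcID, Slot, Opts),
    LatestRes = dev_process_cache:latest(ProcID, Opts),
    {WriteRes, ReadRes, LatestRes}.
```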
## Function Details ## ### find_latest_outputs/1 * ### `find_latest_outputs(Opts) -> any()` Test for retrieving the latest computed output for a process. ### first_with_path/4 * ### `first_with_path(ProcID, RequiredPath, Slots, Opts) -> any()` Find the latest assignment with the requested path suffix. ### first_with_path/5 * ### `first_with_path(ProcID, Required, Rest, Opts, Store) -> any()` ### latest/2 ### `latest(ProcID, Opts) -> any()` Retrieve the latest slot for a given process. Optionally state a limit on the slot number to search for, as well as a required path that the slot must have. ### latest/3 ### `latest(ProcID, RequiredPath, Opts) -> any()` ### latest/4 ### `latest(ProcID, RawRequiredPath, Limit, Opts) -> any()` ### path/3 * ### `path(ProcID, Ref, Opts) -> any()` Calculate the path of a result, given a process ID and a slot. ### path/4 * ### `path(ProcID, Ref, PathSuffix, Opts) -> any()` ### process_cache_suite_test_/0 * ### `process_cache_suite_test_() -> any()` ### read/2 ### `read(ProcID, Opts) -> any()` Read the result of a process at a given slot. ### read/3 ### `read(ProcID, SlotRef, Opts) -> any()` ### test_write_and_read_output/1 * ### `test_write_and_read_output(Opts) -> any()` Test for writing multiple computed outputs, then getting them by their slot number and by their signed and unsigned IDs. ### write/4 ### `write(ProcID, Slot, Msg, Opts) -> any()` Write a process computation result to the cache. --- END OF FILE: docs/resources/source-code/dev_process_cache.md --- --- START OF FILE: docs/resources/source-code/dev_process_worker.md --- # [Module dev_process_worker.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_process_worker.erl) A long-lived process worker that keeps state in memory between calls. ## Description ## Implements the interface of `hb_ao` to receive and respond to computation requests regarding a process as a singleton. ## Function Index ##
await/5Await a resolution from a worker executing the process@1.0 device.
group/3Returns a group name for a request.
grouper_test/0*
info_test/0*
notify_compute/4Notify any waiters for a specific slot of the computed results.
notify_compute/5*
process_to_group_name/2*
send_notification/4*
server/3Spawn a new worker process.
stop/1Stop a worker process.
test_init/0*
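The idea of a long-lived worker holding process state in memory between calls can be pictured as a simple receive loop. This is an illustrative sketch, not the module's implementation (which integrates with `hb_ao` and group naming).

```erlang
%% Sketch: a worker that keeps state in memory and serves compute requests.
-module(process_worker_sketch).
-export([worker_loop/1]).

worker_loop(State) ->
    receive
        {compute, From, Slot} ->
            {Result, NewState} = compute_slot(Slot, State),  % hypothetical step
            From ! {computed, Slot, Result},
            worker_loop(NewState);
        stop ->
            ok
    end.

%% Stand-in for applying the process's execution stack to one slot.
compute_slot(Slot, State) -> {{slot, Slot}, State}.
```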
## Function Details ## ### await/5 ### `await(Worker, GroupName, Msg1, Msg2, Opts) -> any()` Await a resolution from a worker executing the `process@1.0` device. ### group/3 ### `group(Msg1, Msg2, Opts) -> any()` Returns a group name for a request. The worker is responsible for all computation work on the same process on a single node, so we use the process ID as the group name. ### grouper_test/0 * ### `grouper_test() -> any()` ### info_test/0 * ### `info_test() -> any()` ### notify_compute/4 ### `notify_compute(GroupName, SlotToNotify, Msg3, Opts) -> any()` Notify any waiters for a specific slot of the computed results. ### notify_compute/5 * ### `notify_compute(GroupName, SlotToNotify, Msg3, Opts, Count) -> any()` ### process_to_group_name/2 * ### `process_to_group_name(Msg1, Opts) -> any()` ### send_notification/4 * ### `send_notification(Listener, GroupName, SlotToNotify, Msg3) -> any()` ### server/3 ### `server(GroupName, Msg1, Opts) -> any()` Spawn a new worker process. This is called after the end of the first execution of `hb_ao:resolve/3`, so the state we are given is the already current. ### stop/1 ### `stop(Worker) -> any()` Stop a worker process. ### test_init/0 * ### `test_init() -> any()` --- END OF FILE: docs/resources/source-code/dev_process_worker.md --- --- START OF FILE: docs/resources/source-code/dev_process.md --- # [Module dev_process.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_process.erl) This module contains the device implementation of AO processes in AO-Core. ## Description ## The core functionality of the module is in 'routing' requests for different functionality (scheduling, computing, and pushing messages) to the appropriate device. This is achieved by swapping out the device of the process message with the necessary component in order to run the execution, then swapping it back before returning. Computation is supported as a stack of devices, customizable by the user, while the scheduling device is (by default) a single device. This allows the devices to share state as needed. Additionally, after each computation step the device caches the result at a path relative to the process definition itself, such that the process message's ID can act as an immutable reference to the process's growing list of interactions. See `dev_process_cache` for details. The external API of the device is as follows: ``` GET /ID/Schedule: Returns the messages in the schedule POST /ID/Schedule: Adds a message to the schedule GET /ID/Compute/[IDorSlotNum]: Returns the state of the process after applying a message GET /ID/Now: Returns the /Results key of the latest computed message ``` An example process definition will look like this: ``` Device: Process/1.0 Scheduler-Device: Scheduler/1.0 Execution-Device: Stack/1.0 Execution-Stack: "Scheduler/1.0", "Cron/1.0", "WASM/1.0", "PoDA/1.0" Cron-Frequency: 10-Minutes WASM-Image: WASMImageID PoDA: Device: PoDA/1.0 Authority: A Authority: B Authority: C Quorum: 2 ``` Runtime options: Cache-Frequency: The number of assignments that will be computed before the full (restorable) state should be cached. Cache-Keys: A list of the keys that should be cached for all assignments, in addition to `/Results`. ## Function Index ##
aos_browsable_state_test_/0*
aos_compute_test_/0*
aos_persistent_worker_benchmark_test_/0*
aos_state_access_via_http_test_/0*
aos_state_patch_test_/0*
as_process/2Change the message so that it has the device set as this module.
compute/3Compute the result of an assignment applied to the process state, if it is the next message.
compute_slot/5*Compute a single slot for a process, given an initialized state.
compute_to_slot/5*Continually get and apply the next assignment from the scheduler until we reach the target slot that the user has requested.
default_device/3*Returns the default device for a given piece of functionality.
default_device_index/1*
dev_test_process/0Generate a device that has a stack of two dev_tests for execution.
do_test_restore/0
ensure_loaded/3*Ensure that the process message we have in memory is live and up-to-date.
ensure_process_key/2Helper function to store a copy of the process key in the message.
get_scheduler_slot_test/0*
http_wasm_process_by_id_test/0*
info/1When the info key is called, we should return the process exports.
init/0
init/3*Before computation begins, a boot phase is required.
next/3*
now/3Returns the known state of the process at either the current slot, or the latest slot in the cache depending on the process_now_from_cache option.
now_results_test_/0*
persistent_process_test/0*
prior_results_accessible_test_/0*
process_id/3Returns the process ID of the current process.
push/3Recursively push messages to the scheduler until we find a message that does not lead to any further messages being scheduled.
recursive_path_resolution_test/0*
restore_test_/0*Manually test state restoration without using the cache.
run_as/4*Run a message against Msg1, with the device being swapped out for the device found at Key.
schedule/3Wraps functions in the Scheduler device.
schedule_aos_call/2
schedule_aos_call/3
schedule_on_process_test_/0*
schedule_test_message/3*
schedule_test_message/4*
schedule_wasm_call/3*
schedule_wasm_call/4*
simple_wasm_persistent_worker_benchmark_test/0*
slot/3
snapshot/3
store_result/5*Store the resulting state in the cache, potentially with the snapshot key.
test_aos_process/0Generate a process message with a random number, and the dev_wasm device for execution.
test_aos_process/1
test_aos_process/2*
test_base_process/0*Generate a process message with a random number, and no executor.
test_base_process/1*
test_device_compute_test/0*
test_wasm_process/1
test_wasm_process/2*
wasm_compute_from_id_test/0*
wasm_compute_test/0*
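The external HTTP API listed in the description above can be exercised directly. The sketch below reads a process's schedule and its latest results; the host, port, and the assumption that the node serves these paths without further parameters are all specific to your deployment.

```erlang
%% Sketch: read a process's schedule and latest results over HTTP.
%% ProcessId is a string, e.g. "abc123..."; adjust host and port as needed.
-module(process_http_sketch).
-export([fetch/1]).

fetch(ProcessId) ->
    ok = application:ensure_started(inets),
    Base = "http://localhost:8734/" ++ ProcessId,
    {ok, {_Status1, _Headers1, Schedule}} = httpc:request(Base ++ "/Schedule"),
    {ok, {_Status2, _Headers2, Now}} = httpc:request(Base ++ "/Now"),
    #{schedule => Schedule, now => Now}.
```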
## Function Details ## ### aos_browsable_state_test_/0 * ### `aos_browsable_state_test_() -> any()` ### aos_compute_test_/0 * ### `aos_compute_test_() -> any()` ### aos_persistent_worker_benchmark_test_/0 * ### `aos_persistent_worker_benchmark_test_() -> any()` ### aos_state_access_via_http_test_/0 * ### `aos_state_access_via_http_test_() -> any()` ### aos_state_patch_test_/0 * ### `aos_state_patch_test_() -> any()` ### as_process/2 ### `as_process(Msg1, Opts) -> any()` Change the message to for that has the device set as this module. In situations where the key that is `run_as` returns a message with a transformed device, this is useful. ### compute/3 ### `compute(Msg1, Msg2, Opts) -> any()` Compute the result of an assignment applied to the process state, if it is the next message. ### compute_slot/5 * ### `compute_slot(ProcID, State, RawInputMsg, ReqMsg, Opts) -> any()` Compute a single slot for a process, given an initialized state. ### compute_to_slot/5 * ### `compute_to_slot(ProcID, Msg1, Msg2, TargetSlot, Opts) -> any()` Continually get and apply the next assignment from the scheduler until we reach the target slot that the user has requested. ### default_device/3 * ### `default_device(Msg1, Key, Opts) -> any()` Returns the default device for a given piece of functionality. Expects the `process/variant` key to be set in the message. The `execution-device` _must_ be set in all processes aside those marked with `ao.TN.1` variant. This is in order to ensure that post-mainnet processes do not default to using infrastructure that should not be present on nodes in the future. ### default_device_index/1 * ### `default_device_index(X1) -> any()` ### dev_test_process/0 ### `dev_test_process() -> any()` Generate a device that has a stack of two `dev_test`s for execution. This should generate a message state has doubled `Already-Seen` elements for each assigned slot. ### do_test_restore/0 ### `do_test_restore() -> any()` ### ensure_loaded/3 * ### `ensure_loaded(Msg1, Msg2, Opts) -> any()` Ensure that the process message we have in memory is live and up-to-date. ### ensure_process_key/2 ### `ensure_process_key(Msg1, Opts) -> any()` Helper function to store a copy of the `process` key in the message. ### get_scheduler_slot_test/0 * ### `get_scheduler_slot_test() -> any()` ### http_wasm_process_by_id_test/0 * ### `http_wasm_process_by_id_test() -> any()` ### info/1 ### `info(Msg1) -> any()` When the info key is called, we should return the process exports. ### init/0 ### `init() -> any()` ### init/3 * ### `init(Msg1, Msg2, Opts) -> any()` Before computation begins, a boot phase is required. This phase allows devices on the execution stack to initialize themselves. We set the `Initialized` key to `True` to indicate that the process has been initialized. ### next/3 * ### `next(Msg1, Msg2, Opts) -> any()` ### now/3 ### `now(RawMsg1, Msg2, Opts) -> any()` Returns the known state of the process at either the current slot, or the latest slot in the cache depending on the `process_now_from_cache` option. ### now_results_test_/0 * ### `now_results_test_() -> any()` ### persistent_process_test/0 * ### `persistent_process_test() -> any()` ### prior_results_accessible_test_/0 * ### `prior_results_accessible_test_() -> any()` ### process_id/3 ### `process_id(Msg1, Msg2, Opts) -> any()` Returns the process ID of the current process. ### push/3 ### `push(Msg1, Msg2, Opts) -> any()` Recursively push messages to the scheduler until we find a message that does not lead to any further messages being scheduled. 
### recursive_path_resolution_test/0 * ### `recursive_path_resolution_test() -> any()` ### restore_test_/0 * ### `restore_test_() -> any()` Manually test state restoration without using the cache. ### run_as/4 * ### `run_as(Key, Msg1, Msg2, Opts) -> any()` Run a message against Msg1, with the device being swapped out for the device found at `Key`. After execution, the device is swapped back to the original device if the device is the same as we left it. ### schedule/3 ### `schedule(Msg1, Msg2, Opts) -> any()` Wraps functions in the Scheduler device. ### schedule_aos_call/2 ### `schedule_aos_call(Msg1, Code) -> any()` ### schedule_aos_call/3 ### `schedule_aos_call(Msg1, Code, Opts) -> any()` ### schedule_on_process_test_/0 * ### `schedule_on_process_test_() -> any()` ### schedule_test_message/3 * ### `schedule_test_message(Msg1, Text, Opts) -> any()` ### schedule_test_message/4 * ### `schedule_test_message(Msg1, Text, MsgBase, Opts) -> any()` ### schedule_wasm_call/3 * ### `schedule_wasm_call(Msg1, FuncName, Params) -> any()` ### schedule_wasm_call/4 * ### `schedule_wasm_call(Msg1, FuncName, Params, Opts) -> any()` ### simple_wasm_persistent_worker_benchmark_test/0 * ### `simple_wasm_persistent_worker_benchmark_test() -> any()` ### slot/3 ### `slot(Msg1, Msg2, Opts) -> any()` ### snapshot/3 ### `snapshot(RawMsg1, Msg2, Opts) -> any()` ### store_result/5 * ### `store_result(ProcID, Slot, Msg3, Msg2, Opts) -> any()` Store the resulting state in the cache, potentially with the snapshot key. ### test_aos_process/0 ### `test_aos_process() -> any()` Generate a process message with a random number, and the `dev_wasm` device for execution. ### test_aos_process/1 ### `test_aos_process(Opts) -> any()` ### test_aos_process/2 * ### `test_aos_process(Opts, Stack) -> any()` ### test_base_process/0 * ### `test_base_process() -> any()` Generate a process message with a random number, and no executor. ### test_base_process/1 * ### `test_base_process(Opts) -> any()` ### test_device_compute_test/0 * ### `test_device_compute_test() -> any()` ### test_wasm_process/1 ### `test_wasm_process(WASMImage) -> any()` ### test_wasm_process/2 * ### `test_wasm_process(WASMImage, Opts) -> any()` ### wasm_compute_from_id_test/0 * ### `wasm_compute_from_id_test() -> any()` ### wasm_compute_test/0 * ### `wasm_compute_test() -> any()` --- END OF FILE: docs/resources/source-code/dev_process.md --- --- START OF FILE: docs/resources/source-code/dev_push.md --- # [Module dev_push.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_push.erl) `push@1.0` takes a message or slot number, evaluates it, and recursively pushes the resulting messages to other processes. ## Description ## The `push`ing mechanism continues until the there are no remaining messages to push. ## Function Index ##
apply_security/4*Apply the recipient's security policy to the message.
apply_security/5*
augment_message/3*Set the necessary keys in order for the recipient to know where the message came from.
calculate_base_id/2*Calculate the base ID for a process.
commit_result/4*Attempt to sign a result message with the given committers.
do_push/3*Push a message or slot number, including its downstream results.
extract/2*Return either the target or the hint.
find_type/2*
full_push_test_/0*
is_async/3*Determine if the push is asynchronous.
message_to_legacynet_scheduler_script/0*
multi_process_push_test_/0*
normalize_message/2*Augment the message with from-* keys, if it doesn't already have them.
parse_redirect/2*
ping_pong_script/1*Test that a message that generates another message which resides on an ANS-104 scheduler leads to ~push@1.0 re-signing the message correctly.
push/3Push either a message or an assigned slot number.
push_as_identity_test_/0*
push_prompts_encoding_change/0*
push_prompts_encoding_change_test_/0*
push_result_message/4*Push a downstream message result.
push_with_mode/3*
push_with_redirect_hint_test_disabled/0*
remote_schedule_result/3*
reply_script/0*
schedule_initial_message/3*Push a message or a process, prior to pushing the resulting slot number.
schedule_result/4*Add the necessary keys to the message to be scheduled, then schedule it.
schedule_result/5*
split_target/1*Split the target into the process ID and the optional query string.
target_process/2*Find the target process ID for a message to push.
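The recursive mechanism described above can be summarized as: evaluate a message, then push every message it produced, until nothing remains. A minimal sketch (ignoring result depth, async mode, and remote schedulers), where `EvalFun` stands in for scheduling and computing a single message and returns its outbox:

```erlang
%% Sketch: depth-first recursive push until no further messages are produced.
-module(push_sketch).
-export([push_all/2]).

push_all(EvalFun, Msg) ->
    Outbox = EvalFun(Msg),                              % messages produced by Msg
    lists:foreach(fun(Next) -> push_all(EvalFun, Next) end, Outbox),
    ok.
```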
## Function Details ## ### apply_security/4 * ### `apply_security(Msg, TargetProcess, Codec, Opts) -> any()` Apply the recipient's security policy to the message. Observes the following parameters in order to calculate the appropriate security policy: - `policy`: A message that generates a security policy message. - `authority`: A single committer, or list of comma separated committers. - (Default: Signs with default wallet) ### apply_security/5 * ### `apply_security(X1, Msg, TargetProcess, Codec, Opts) -> any()` ### augment_message/3 * ### `augment_message(Origin, ToSched, Opts) -> any()` Set the necessary keys in order for the recipient to know where the message came from. ### calculate_base_id/2 * ### `calculate_base_id(GivenProcess, Opts) -> any()` Calculate the base ID for a process. The base ID is not just the uncommitted process ID. It also excludes the `authority` and `scheduler` keys. ### commit_result/4 * ### `commit_result(Msg, Committers, Codec, Opts) -> any()` Attempt to sign a result message with the given committers. ### do_push/3 * ### `do_push(PrimaryProcess, Assignment, Opts) -> any()` Push a message or slot number, including its downstream results. ### extract/2 * ### `extract(X1, Raw) -> any()` Return either the `target` or the `hint`. ### find_type/2 * ### `find_type(Req, Opts) -> any()` ### full_push_test_/0 * ### `full_push_test_() -> any()` ### is_async/3 * ### `is_async(Process, Req, Opts) -> any()` Determine if the push is asynchronous. ### message_to_legacynet_scheduler_script/0 * ### `message_to_legacynet_scheduler_script() -> any()` ### multi_process_push_test_/0 * ### `multi_process_push_test_() -> any()` ### normalize_message/2 * ### `normalize_message(MsgToPush, Opts) -> any()` Augment the message with from-* keys, if it doesn't already have them. ### parse_redirect/2 * ### `parse_redirect(Location, Opts) -> any()` ### ping_pong_script/1 * ### `ping_pong_script(Limit) -> any()` Test that a message that generates another message which resides on an ANS-104 scheduler leads to `~push@1.0` re-signing the message correctly. Requires `ENABLE_GENESIS_WASM` to be enabled. ### push/3 ### `push(Base, Req, Opts) -> any()` Push either a message or an assigned slot number. If a `Process` is provided in the `body` of the request, it will be scheduled (initializing it if it does not exist). Otherwise, the message specified by the given `slot` key will be pushed. Optional parameters: `/result-depth`: The depth to which the full contents of the result will be included in the response. Default: 1, returning the full result of the first message, but only the 'tree' of downstream messages. `/push-mode`: Whether or not the push should be done asynchronously. Default: `sync`, pushing synchronously. ### push_as_identity_test_/0 * ### `push_as_identity_test_() -> any()` ### push_prompts_encoding_change/0 * ### `push_prompts_encoding_change() -> any()` ### push_prompts_encoding_change_test_/0 * ### `push_prompts_encoding_change_test_() -> any()` ### push_result_message/4 * ### `push_result_message(TargetProcess, MsgToPush, Origin, Opts) -> any()` Push a downstream message result. The `Origin` map contains information about the origin of the message: The process that originated the message, the slot number from which it was sent, and the outbox key of the message, and the depth to which downstream results should be included in the message. 
### push_with_mode/3 * ###

`push_with_mode(Process, Req, Opts) -> any()`

### push_with_redirect_hint_test_disabled/0 * ###

`push_with_redirect_hint_test_disabled() -> any()`

### remote_schedule_result/3 * ###

`remote_schedule_result(Location, SignedReq, Opts) -> any()`

### reply_script/0 * ###

`reply_script() -> any()`

### schedule_initial_message/3 * ###

`schedule_initial_message(Base, Req, Opts) -> any()`

Push a message or a process, prior to pushing the resulting slot number.

### schedule_result/4 * ###

`schedule_result(TargetProcess, MsgToPush, Origin, Opts) -> any()`

Add the necessary keys to the message to be scheduled, then schedule it. If the remote scheduler does not support the given codec, it will be downgraded and re-signed.

### schedule_result/5 * ###

`schedule_result(TargetProcess, MsgToPush, Codec, Origin, Opts) -> any()`

### split_target/1 * ###

`split_target(RawTarget) -> any()`

Split the target into the process ID and the optional query string.

### target_process/2 * ###

`target_process(MsgToPush, Opts) -> any()`

Find the target process ID for a message to push.

--- END OF FILE: docs/resources/source-code/dev_push.md ---

--- START OF FILE: docs/resources/source-code/dev_relay.md ---

# [Module dev_relay.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_relay.erl)

This module implements the relay device, which is responsible for relaying messages between nodes and other HTTP(S) endpoints.

## Description ##

It can be called in either `call` or `cast` mode. In `call` mode, it returns an `{ok, Result}` tuple, where `Result` is the response from the remote peer to the message sent. In `cast` mode, the invocation returns immediately, and the message is relayed asynchronously. No response is given and the device returns `{ok, <<"OK">>}`.

Example usage:

```
curl /~relay@1.0/call?method=GET&0.path=https://www.arweave.net/
```

## Function Index ##
call/3Execute a call request using a node's routes.
call_get_test/0*
cast/3Execute a request in the same way as call/3, but asynchronously.
commit_request_test/0*Test that a relay@1.0/call correctly commits requests as specified.
relay_nearest_test/0*
request/3Preprocess a request to check if it should be relayed to a different node.
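The difference between the two modes described above is simply whether the caller waits for the relayed response. A sketch of that distinction, with `DispatchFun` standing in for the actual HTTP relay:

```erlang
%% Sketch: 'call' waits for the remote result; 'cast' relays in the
%% background and returns immediately.
-module(relay_mode_sketch).
-export([relay/3]).

relay(call, DispatchFun, Msg) ->
    {ok, DispatchFun(Msg)};
relay(cast, DispatchFun, Msg) ->
    spawn(fun() -> DispatchFun(Msg) end),
    {ok, <<"OK">>}.
```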
## Function Details ##

### call/3 ###

`call(M1, RawM2, Opts) -> any()`

Execute a `call` request using a node's routes. Supports the following options:

- `target`: The target message to relay. Defaults to the original message.
- `relay-path`: The path to relay the message to. Defaults to the original path.
- `method`: The method to use for the request. Defaults to the original method.
- `commit-request`: Whether the request should be committed before dispatching. Defaults to `false`.

### call_get_test/0 * ###

`call_get_test() -> any()`

### cast/3 ###

`cast(M1, M2, Opts) -> any()`

Execute a request in the same way as `call/3`, but asynchronously. Always returns `<<"OK">>`.

### commit_request_test/0 * ###

`commit_request_test() -> any()`

Test that a `relay@1.0/call` correctly commits requests as specified. We validate this by configuring two nodes: one that will execute a given request from a user, but only if the request is committed, and another that re-routes all requests to the first node, using `call`'s `commit-request` key to sign the request during proxying. The initial request is not signed, such that the first node would otherwise reject the request outright.

### relay_nearest_test/0 * ###

`relay_nearest_test() -> any()`

### request/3 ###

`request(Msg1, Msg2, Opts) -> any()`

Preprocess a request to check if it should be relayed to a different node.

--- END OF FILE: docs/resources/source-code/dev_relay.md ---

--- START OF FILE: docs/resources/source-code/dev_router.md ---

# [Module dev_router.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_router.erl)

A device that routes outbound messages from the node to their appropriate network recipients via HTTP.

## Description ##

All messages are initially routed to a single process per node, which then load-balances them between downstream workers that perform the actual requests.

The routes for the router are defined in the `routes` key of the `Opts`, as a precedence-ordered list of maps. The first map that matches the message will be used to determine the route.

Multiple nodes can be specified as viable for a single route, with the `Choose` key determining how many nodes to choose from the list (defaulting to 1). The `Strategy` key determines the load distribution strategy, which can be one of `Random`, `By-Base`, or `Nearest`.

The route may also define additional parallel execution parameters, which are used by the `hb_http` module to manage control of requests.

The structure of the routes should be as follows:

```
Node?: The node to route the message to.
Nodes?: A list of nodes to route the message to.
Strategy?: The load distribution strategy to use.
Choose?: The number of nodes to choose from the list.
Template?: A message template to match the message against, either as a
           map or a path regex.
```

## Function Index ##
add_route_test/0*
apply_route/3*Apply a node map's rules for transforming the path of the message.
apply_routes/3*Generate a uri key for each node in a route.
binary_to_bignum/1*Cast a human-readable or native-encoded ID to a big integer.
by_base_determinism_test/0*Ensure that By-Base always chooses the same node for the same hashpath.
choose/5*Implements the load distribution strategies if given a cluster.
choose_1_test/1*
choose_n_test/1*
device_call_from_singleton_test/0*
do_apply_route/3*
dynamic_route_provider_test/0*
dynamic_router/0*
dynamic_router_test_/0*Example of a Lua module being used as the route_provider for a HyperBEAM node.
dynamic_routing_by_performance/0*
dynamic_routing_by_performance_test_/0*Demonstrates routing tables being dynamically created and adjusted according to the real-time performance of nodes.
explicit_route_test/0*
extract_base/2*Extract the base message ID from a request message.
field_distance/2*Calculate the minimum distance between two numbers (either progressing backwards or forwards), assuming a 256-bit field.
find_target_path/2*Find the target path to route for a request message.
generate_hashpaths/1*
generate_nodes/1*
get_routes_test/0*
info/1Exported function for getting device info, controls which functions are exposed via the device API.
info/3HTTP info response providing information about this device.
load_routes/1*Load the current routes for the node.
local_dynamic_router/0*
local_dynamic_router_test_/0*Example of a Lua module being used as the route_provider for a HyperBEAM node.
local_process_route_provider/0*
local_process_route_provider_test_/0*
lowest_distance/1*Find the node with the lowest distance to the given hashpath.
lowest_distance/2*
match/3Find the first matching template in a list of known routes.
match_routes/3*
match_routes/4*
preprocess/3Preprocess a request to check if it should be relayed to a different node.
register/3Register function that allows telling the current node to register a new route with a remote router node.
request_hook_reroute_to_nearest_test/0*Test that the preprocess/3 function re-routes a request to remote peers via ~relay@1.0, according to the node's routing table.
route/2Find the appropriate route for the given message.
route/3
route_provider_test/0*
route_regex_matches_test/0*
route_template_message_matches_test/0*
routes/3Device function that returns all known routes.
simulate/4*
simulation_distribution/2*
simulation_occurences/2*
strategy_suite_test_/0*
template_matches/3*Check if a message matches a message template or path regex.
unique_nodes/1*
unique_test/1*
weighted_random_strategy_test/0*
within_norms/3*
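Two of the load distribution strategies listed under `route/2` in the details below can be contrasted with a small sketch: `Random` spreads requests evenly, while `By-Base` derives the choice from the base message so the same base always reaches the same node. The real device measures distance between hashpaths in a 256-bit field; `erlang:phash2/2` is used here only to keep the illustration short.

```erlang
%% Sketch: pick one node from a route's node list.
-module(router_strategy_sketch).
-export([choose_node/3]).

choose_node(<<"Random">>, _Base, Nodes) ->
    lists:nth(rand:uniform(length(Nodes)), Nodes);
choose_node(<<"By-Base">>, Base, Nodes) ->
    %% Deterministic: the same Base always maps to the same node.
    lists:nth(1 + erlang:phash2(Base, length(Nodes)), Nodes).
```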
## Function Details ## ### add_route_test/0 * ### `add_route_test() -> any()` ### apply_route/3 * ### `apply_route(Msg, Route, Opts) -> any()` Apply a node map's rules for transforming the path of the message. Supports the following keys: - `opts`: A map of options to pass to the request. - `prefix`: The prefix to add to the path. - `suffix`: The suffix to add to the path. - `replace`: A regex to replace in the path. ### apply_routes/3 * ### `apply_routes(Msg, R, Opts) -> any()` Generate a `uri` key for each node in a route. ### binary_to_bignum/1 * ### `binary_to_bignum(Bin) -> any()` Cast a human-readable or native-encoded ID to a big integer. ### by_base_determinism_test/0 * ### `by_base_determinism_test() -> any()` Ensure that `By-Base` always chooses the same node for the same hashpath. ### choose/5 * ### `choose(N, X2, Hashpath, Nodes, Opts) -> any()` Implements the load distribution strategies if given a cluster. ### choose_1_test/1 * ### `choose_1_test(Strategy) -> any()` ### choose_n_test/1 * ### `choose_n_test(Strategy) -> any()` ### device_call_from_singleton_test/0 * ### `device_call_from_singleton_test() -> any()` ### do_apply_route/3 * ### `do_apply_route(X1, R, Opts) -> any()` ### dynamic_route_provider_test/0 * ### `dynamic_route_provider_test() -> any()` ### dynamic_router/0 * ### `dynamic_router() -> any()` ### dynamic_router_test_/0 * ### `dynamic_router_test_() -> any()` Example of a Lua module being used as the `route_provider` for a HyperBEAM node. The module utilized in this example dynamically adjusts the likelihood of routing to a given node, depending upon price and performance. also include preprocessing support for routing ### dynamic_routing_by_performance/0 * ### `dynamic_routing_by_performance() -> any()` ### dynamic_routing_by_performance_test_/0 * ### `dynamic_routing_by_performance_test_() -> any()` Demonstrates routing tables being dynamically created and adjusted according to the real-time performance of nodes. This test utilizes the `dynamic-router` script to manage routes and recalculate weights based on the reported performance. ### explicit_route_test/0 * ### `explicit_route_test() -> any()` ### extract_base/2 * ### `extract_base(RawPath, Opts) -> any()` Extract the base message ID from a request message. Produces a single binary ID that can be used for routing decisions. ### field_distance/2 * ### `field_distance(A, B) -> any()` Calculate the minimum distance between two numbers (either progressing backwards or forwards), assuming a 256-bit field. ### find_target_path/2 * ### `find_target_path(Msg, Opts) -> any()` Find the target path to route for a request message. ### generate_hashpaths/1 * ### `generate_hashpaths(Runs) -> any()` ### generate_nodes/1 * ### `generate_nodes(N) -> any()` ### get_routes_test/0 * ### `get_routes_test() -> any()` ### info/1 ### `info(X1) -> any()` Exported function for getting device info, controls which functions are exposed via the device API. ### info/3 ### `info(Msg1, Msg2, Opts) -> any()` HTTP info response providing information about this device ### load_routes/1 * ### `load_routes(Opts) -> any()` Load the current routes for the node. Allows either explicit routes from the node message's `routes` key, or dynamic routes generated by resolving the `route_provider` message. ### local_dynamic_router/0 * ### `local_dynamic_router() -> any()` ### local_dynamic_router_test_/0 * ### `local_dynamic_router_test_() -> any()` Example of a Lua module being used as the `route_provider` for a HyperBEAM node. 
The module utilized in this example dynamically adjusts the likelihood of routing to a given node, depending upon price and performance. ### local_process_route_provider/0 * ### `local_process_route_provider() -> any()` ### local_process_route_provider_test_/0 * ### `local_process_route_provider_test_() -> any()` ### lowest_distance/1 * ### `lowest_distance(Nodes) -> any()` Find the node with the lowest distance to the given hashpath. ### lowest_distance/2 * ### `lowest_distance(Nodes, X) -> any()` ### match/3 ### `match(Base, Req, Opts) -> any()` Find the first matching template in a list of known routes. Allows the path to be specified by either the explicit `path` (for internal use by this module), or `route-path` for use by external devices and users. ### match_routes/3 * ### `match_routes(ToMatch, Routes, Opts) -> any()` ### match_routes/4 * ### `match_routes(ToMatch, Routes, Keys, Opts) -> any()` ### preprocess/3 ### `preprocess(Msg1, Msg2, Opts) -> any()` Preprocess a request to check if it should be relayed to a different node. ### register/3 ### `register(M1, M2, Opts) -> any()` Register function that allows telling the current node to register a new route with a remote router node. This function should also be idempotent. so that it can be called only once. ### request_hook_reroute_to_nearest_test/0 * ### `request_hook_reroute_to_nearest_test() -> any()` Test that the `preprocess/3` function re-routes a request to remote peers via `~relay@1.0`, according to the node's routing table. ### route/2 ### `route(Msg, Opts) -> any()` Find the appropriate route for the given message. If we are able to resolve to a single host+path, we return that directly. Otherwise, we return the matching route (including a list of nodes under `nodes`) from the list of routes. If we have a route that has multiple resolving nodes, check the load distribution strategy and choose a node. Supported strategies: ``` All: Return all nodes (default). Random: Distribute load evenly across all nodes, non-deterministically. By-Base: According to the base message's hashpath. By-Weight: According to the node's weight key. Nearest: According to the distance of the node's wallet address to the base message's hashpath. ``` `By-Base` will ensure that all traffic for the same hashpath is routed to the same node, minimizing work duplication, while `Random` ensures a more even distribution of the requests. Can operate as a `~router@1.0` device, which will ignore the base message, routing based on the Opts and request message provided, or as a standalone function, taking only the request message and the `Opts` map. ### route/3 ### `route(X1, Msg, Opts) -> any()` ### route_provider_test/0 * ### `route_provider_test() -> any()` ### route_regex_matches_test/0 * ### `route_regex_matches_test() -> any()` ### route_template_message_matches_test/0 * ### `route_template_message_matches_test() -> any()` ### routes/3 ### `routes(M1, M2, Opts) -> any()` Device function that returns all known routes. ### simulate/4 * ### `simulate(Runs, ChooseN, Nodes, Strategy) -> any()` ### simulation_distribution/2 * ### `simulation_distribution(SimRes, Nodes) -> any()` ### simulation_occurences/2 * ### `simulation_occurences(SimRes, Nodes) -> any()` ### strategy_suite_test_/0 * ### `strategy_suite_test_() -> any()` ### template_matches/3 * ### `template_matches(ToMatch, Template, Opts) -> any()` Check if a message matches a message template or path regex. 
### unique_nodes/1 * ### `unique_nodes(Simulation) -> any()` ### unique_test/1 * ### `unique_test(Strategy) -> any()` ### weighted_random_strategy_test/0 * ### `weighted_random_strategy_test() -> any()` ### within_norms/3 * ### `within_norms(SimRes, Nodes, TestSize) -> any()` --- END OF FILE: docs/resources/source-code/dev_router.md --- --- START OF FILE: docs/resources/source-code/dev_scheduler_cache.md --- # [Module dev_scheduler_cache.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_scheduler_cache.erl) ## Function Index ##
latest/2Get the latest assignment from the cache.
list/2Get the assignments for a process.
read/3Get an assignment message from the cache.
read_location/2Read the latest known scheduler location for an address.
write/2Write an assignment message into the cache.
write_location/2Write the latest known scheduler location for an address.
## Function Details ## ### latest/2 ### `latest(ProcID, Opts) -> any()` Get the latest assignment from the cache. ### list/2 ### `list(ProcID, Opts) -> any()` Get the assignments for a process. ### read/3 ### `read(ProcID, Slot, Opts) -> any()` Get an assignment message from the cache. ### read_location/2 ### `read_location(Address, Opts) -> any()` Read the latest known scheduler location for an address. ### write/2 ### `write(Assignment, Opts) -> any()` Write an assignment message into the cache. ### write_location/2 ### `write_location(LocationMsg, Opts) -> any()` Write the latest known scheduler location for an address. --- END OF FILE: docs/resources/source-code/dev_scheduler_cache.md --- --- START OF FILE: docs/resources/source-code/dev_scheduler_formats.md --- # [Module dev_scheduler_formats.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_scheduler_formats.erl) This module is used by dev_scheduler in order to produce outputs that are compatible with various forms of AO clients. ## Description ## It features two main formats: - `application/json` - `application/http` The `application/json` format is a legacy format that is not recommended for new integrations of the AO protocol. ## Function Index ##
aos2_normalize_data/1*The hb_gateway_client module expects all JSON structures to at least have a data field.
aos2_normalize_types/1Normalize an AOS2 formatted message to ensure that all field NAMES and types are correct.
aos2_to_assignment/2Create and normalize an assignment from an AOS2-style JSON structure.
aos2_to_assignments/3Convert an AOS2-style JSON structure to a normalized HyperBEAM assignments response.
assignment_to_aos2/2*Convert an assignment to an AOS2-compatible JSON structure.
assignments_to_aos2/4
assignments_to_bundle/4Generate a GET /schedule response for a process as HTTP-sig bundles.
assignments_to_bundle/5*
cursor/2*Generate a cursor for an assignment.
format_opts/1*For all scheduler format operations, we do not calculate hashpaths, perform cache lookups, or await inprogress results.
## Function Details ## ### aos2_normalize_data/1 * ### `aos2_normalize_data(JSONStruct) -> any()` The `hb_gateway_client` module expects all JSON structures to at least have a `data` field. This function ensures that. ### aos2_normalize_types/1 ### `aos2_normalize_types(Msg) -> any()` Normalize an AOS2 formatted message to ensure that all field NAMES and types are correct. This involves converting field names to integers and specific field names to their canonical form. NOTE: This will result in a message that is not verifiable! It is, however, necessary for gaining compatibility with the AOS2-style scheduling API. ### aos2_to_assignment/2 ### `aos2_to_assignment(A, RawOpts) -> any()` Create and normalize an assignment from an AOS2-style JSON structure. NOTE: This method is destructive to the verifiability of the assignment. ### aos2_to_assignments/3 ### `aos2_to_assignments(ProcID, Body, RawOpts) -> any()` Convert an AOS2-style JSON structure to a normalized HyperBEAM assignments response. ### assignment_to_aos2/2 * ### `assignment_to_aos2(Assignment, RawOpts) -> any()` Convert an assignment to an AOS2-compatible JSON structure. ### assignments_to_aos2/4 ### `assignments_to_aos2(ProcID, Assignments, More, RawOpts) -> any()` ### assignments_to_bundle/4 ### `assignments_to_bundle(ProcID, Assignments, More, Opts) -> any()` Generate a `GET /schedule` response for a process as HTTP-sig bundles. ### assignments_to_bundle/5 * ### `assignments_to_bundle(ProcID, Assignments, More, TimeInfo, RawOpts) -> any()` ### cursor/2 * ### `cursor(Assignment, RawOpts) -> any()` Generate a cursor for an assignment. This should be the slot number, at least in the case of mainnet `ao.N.1` assignments. In the case of legacynet (`ao.TN.1`) assignments, we may want to use the assignment ID. ### format_opts/1 * ### `format_opts(Opts) -> any()` For all scheduler format operations, we do not calculate hashpaths, perform cache lookups, or await inprogress results. --- END OF FILE: docs/resources/source-code/dev_scheduler_formats.md --- --- START OF FILE: docs/resources/source-code/dev_scheduler_registry.md --- # [Module dev_scheduler_registry.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_scheduler_registry.erl) ## Function Index ##
create_and_find_process_test/0*
create_multiple_processes_test/0*
find/1Find a process associated with the processor ID in the local registry. If the process is not found, it will not create a new one.
find/2Find a process associated with the processor ID in the local registry. If the process is not found and GenIfNotHosted is true, it attempts to create a new one.
find/3Same as find/2 but with additional options passed when spawning a new process (if needed).
find_non_existent_process_test/0*
generate_test_procs/0*
get_all_processes_test/0*
get_processes/0Return a list of all currently registered ProcIDs.
get_wallet/0
maybe_new_proc/3*
start/0
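As a rough illustration of the lookup flow summarized in the index above, the following sketch assumes that `find/1` returns `not_found` when no local server exists; the generated specs below only state `any()`, so the return shapes here are assumptions:

```erlang
%% Hedged sketch: look up the scheduling process for ProcID, spawning one via
%% find/3 only if it is not already registered locally. Return shapes are
%% assumptions; the specs below only state any().
lookup_or_spawn(ProcID, ProcMsg, Opts) ->
    case dev_scheduler_registry:find(ProcID) of
        not_found ->
            %% find/3: as find/2, but with extra options used when spawning.
            dev_scheduler_registry:find(ProcID, ProcMsg, Opts);
        PID ->
            %% Already registered locally.
            PID
    end.
```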
## Function Details ## ### create_and_find_process_test/0 * ### `create_and_find_process_test() -> any()` ### create_multiple_processes_test/0 * ### `create_multiple_processes_test() -> any()` ### find/1 ### `find(ProcID) -> any()` Find a process associated with the processor ID in the local registry. If the process is not found, it will not create a new one. ### find/2 ### `find(ProcID, ProcMsgOrFalse) -> any()` Find a process associated with the processor ID in the local registry. If the process is not found and `GenIfNotHosted` is true, it attempts to create a new one. ### find/3 ### `find(ProcID, ProcMsgOrFalse, Opts) -> any()` Same as `find/2` but with additional options passed when spawning a new process (if needed). ### find_non_existent_process_test/0 * ### `find_non_existent_process_test() -> any()` ### generate_test_procs/0 * ### `generate_test_procs() -> any()` ### get_all_processes_test/0 * ### `get_all_processes_test() -> any()` ### get_processes/0 ### `get_processes() -> any()` Return a list of all currently registered ProcIDs. ### get_wallet/0 ### `get_wallet() -> any()` ### maybe_new_proc/3 * ### `maybe_new_proc(ProcID, ProcMsg, Opts) -> any()` ### start/0 ### `start() -> any()` --- END OF FILE: docs/resources/source-code/dev_scheduler_registry.md --- --- START OF FILE: docs/resources/source-code/dev_scheduler_server.md --- # [Module dev_scheduler_server.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_scheduler_server.erl) A long-lived server that schedules messages for a process. ## Description ## It acts as a deliberate 'bottleneck' to prevent the server accidentally assigning multiple messages to the same slot. ## Function Index ##
assign/3*Assign a message to the next slot.
commit_assignment/2*Commit to the assignment using all of our appropriate wallets.
commitment_wallets/2*Determine the appropriate list of keys to use to commit assignments for a process.
do_assign/3*Generate and store the actual assignment message.
info/1Get the current slot from the scheduling server.
maybe_inform_recipient/5*Potentially inform the caller that the assignment has been scheduled.
new_proc_test/0*Test the basic functionality of the server.
next_hashchain/3*Create the next element in a chain of hashes that links this and prior assignments.
schedule/2Call the appropriate scheduling server to assign a message.
server/1*The main loop of the server.
start/3Start a scheduling server for a given computation.
stop/1
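A minimal sketch of how the exported functions listed above might be combined; the shapes of `Proc`, `Message`, and the return values are assumptions, since the specs below only state `any()`:

```erlang
%% Hedged sketch: start a scheduling server for a process, assign a single
%% message to the next slot, then ask the server for its current slot.
%% Argument and return shapes are assumptions.
schedule_one(ProcID, Proc, Message, Opts) ->
    _PID = dev_scheduler_server:start(ProcID, Proc, Opts),
    Assignment = dev_scheduler_server:schedule(ProcID, Message),
    SlotInfo = dev_scheduler_server:info(ProcID),
    {Assignment, SlotInfo}.
```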
## Function Details ## ### assign/3 * ### `assign(State, Message, ReplyPID) -> any()` Assign a message to the next slot. ### commit_assignment/2 * ### `commit_assignment(BaseAssignment, State) -> any()` Commit to the assignment using all of our appropriate wallets. ### commitment_wallets/2 * ### `commitment_wallets(ProcMsg, Opts) -> any()` Determine the appropriate list of keys to use to commit assignments for a process. ### do_assign/3 * ### `do_assign(State, Message, ReplyPID) -> any()` Generate and store the actual assignment message. ### info/1 ### `info(ProcID) -> any()` Get the current slot from the scheduling server. ### maybe_inform_recipient/5 * ### `maybe_inform_recipient(Mode, ReplyPID, Message, Assignment, State) -> any()` Potentially inform the caller that the assignment has been scheduled. The main assignment loop calls this function repeatedly at different stages of the assignment process. The scheduling mode determines which stages trigger an update. ### new_proc_test/0 * ### `new_proc_test() -> any()` Test the basic functionality of the server. ### next_hashchain/3 * ### `next_hashchain(HashChain, Message, Opts) -> any()` Create the next element in a chain of hashes that links this and prior assignments. ### schedule/2 ### `schedule(AOProcID, Message) -> any()` Call the appropriate scheduling server to assign a message. ### server/1 * ### `server(State) -> any()` The main loop of the server. Simply waits for messages to assign and returns the current slot. ### start/3 ### `start(ProcID, Proc, Opts) -> any()` Start a scheduling server for a given computation. ### stop/1 ### `stop(ProcID) -> any()` --- END OF FILE: docs/resources/source-code/dev_scheduler_server.md --- --- START OF FILE: docs/resources/source-code/dev_scheduler.md --- # [Module dev_scheduler.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_scheduler.erl) A simple scheduler scheme for AO. ## Description ## This device expects a message of the form: Process: `#{ id, Scheduler: #{ Authority } }` ``` It exposes the following keys for scheduling:#{ method: GET, path: <<"/info">> } -> Returns information about the scheduler.#{ method: GET, path: <<"/slot">> } -> slot(Msg1, Msg2, Opts) Returns the current slot for a process.#{ method: GET, path: <<"/schedule">> } -> get_schedule(Msg1, Msg2, Opts) Returns the schedule for a process in a cursor-traversable format.#{ method: POST, path: <<"/schedule">> } -> post_schedule(Msg1, Msg2, Opts) Schedules a new message for a process, or starts a new scheduler for the given message. ``` ## Function Index ##
benchmark_suite/2*
benchmark_suite_test_/0*
cache_remote_schedule/2*Cache a schedule received from a remote scheduler.
check_lookahead_and_local_cache/4*Check if we have a result from a lookahead worker or from our local cache.
checkpoint/1Returns the current state of the scheduler.
do_get_remote_schedule/6*Get a schedule from a remote scheduler, unless we have already read all of the assignments from the local cache.
do_post_schedule/4*Post schedule the message.
filter_json_assignments/4*Filter JSON assignment results from a remote legacy scheduler.
find_message_to_schedule/3*Search the given base and request message pair to find the message to schedule.
find_next_assignment/5*Get the assignments for a process from the message cache, local cache, or the inbox (thanks to a lookahead-worker).
find_process_message/4*Find the process message for a given process ID and base message.
find_remote_scheduler/3*Use the SchedulerLocation to find the remote path and return a redirect.
find_server/3*Locate the correct scheduling server for a given process.
find_server/4*
find_target_id/3*Find the schedule ID from a given request.
generate_local_schedule/5*Generate a GET /schedule response for a process.
generate_redirect/3*Generate a redirect message to a scheduler.
get_hint/2*If a hint is present in the string, return it.
get_local_assignments/4*Get the assignments for a process, and whether the request was truncated.
get_local_schedule_test/0*
get_location/3*Search for the location of the scheduler in the scheduler-location cache.
get_remote_schedule/5*Get a schedule from a remote scheduler, but first read all of the assignments from the local cache that we already know about.
get_schedule/3*Generate and return a schedule for a process, optionally between two slots -- labelled as from and to.
http_get_json_schedule_test_/0*
http_get_legacy_schedule_as_aos2_test_/0*
http_get_legacy_schedule_slot_range_test_/0*
http_get_legacy_schedule_test_/0*
http_get_legacy_slot_test_/0*
http_get_schedule/4*
http_get_schedule/5*
http_get_schedule_redirect/0*
http_get_schedule_redirect_test_/0*
http_get_schedule_test_/0*
http_get_slot/2*
http_init/0*
http_init/1*
http_post_legacy_schedule_test_/0*
http_post_schedule/0*
http_post_schedule_sign/4*
http_post_schedule_test_/0*
info/0This device uses a default_handler to route requests to the correct function.
is_local_scheduler/4*Determine if a scheduler is local.
location/3Router for record requests.
many_clients/1*
message_cached_assignments/2*Non-device exported helper to get the cached assignments held in a process.
next/3Load the schedule for a process into the cache, then return the next assignment.
node_from_redirect/2*Get the node URL from a redirect.
parse_schedulers/1General utility functions that are available to other modules.
post_legacy_schedule/4*
post_location/3*Generate a new scheduler location record and register it.
post_remote_schedule/4*
post_schedule/3*Schedules a new message on the SU.
read_local_assignments/4*Get the assignments for a process.
redirect_from_graphql/0*
redirect_from_graphql_test_/0*
redirect_to_hint_test/0*
register_location_on_boot_test/0*Test that a scheduler location is registered on boot.
register_new_process_test/0*
register_scheduler_test/0*
remote_slot/3*Get the current slot from a remote scheduler.
remote_slot/4*Get the current slot from a remote scheduler, based on the variant of the process's scheduler.
router/4The default handler for the scheduler device.
schedule/3A router for choosing between getting the existing schedule, or scheduling a new message.
schedule_message_and_get_slot_test/0*
single_resolution/1*
slot/3Returns information about the current slot for a process.
spawn_lookahead_worker/3*Spawn a new Erlang process to fetch the next assignments from the local cache, if we have them available.
start/0Helper to ensure that the environment is started.
status/3Returns information about the entire scheduler.
status_test/0*
test_process/0Generate a _transformed_ process message, not as they are generated by users.
test_process/1*
validate_next_slot/5*Validate the next slot generated by find_next_assignment.
without_hint/1*Take a process ID or target with a potential hint and return just the process ID.
## Function Details ## ### benchmark_suite/2 * ### `benchmark_suite(Port, Base) -> any()` ### benchmark_suite_test_/0 * ### `benchmark_suite_test_() -> any()` ### cache_remote_schedule/2 * ### `cache_remote_schedule(Schedule, Opts) -> any()` Cache a schedule received from a remote scheduler. ### check_lookahead_and_local_cache/4 * ### `check_lookahead_and_local_cache(Msg1, ProcID, TargetSlot, Opts) -> any()` Check if we have a result from a lookahead worker or from our local cache. If we have a result in the local cache, we may also start a new lookahead worker to fetch the next assignments if we have them locally, ahead of time. This can be enabled/disabled with the `scheduler_lookahead` option. ### checkpoint/1 ### `checkpoint(State) -> any()` Returns the current state of the scheduler. ### do_get_remote_schedule/6 * ### `do_get_remote_schedule(ProcID, LocalAssignments, From, To, Redirect, Opts) -> any()` Get a schedule from a remote scheduler, unless we have already read all of the assignments from the local cache. ### do_post_schedule/4 * ### `do_post_schedule(ProcID, PID, Msg2, Opts) -> any()` Post schedule the message. `Msg2` by this point has been refined to only committed keys, and to only include the `target` message that is to be scheduled. ### filter_json_assignments/4 * ### `filter_json_assignments(JSONRes, To, From, Opts) -> any()` Filter JSON assignment results from a remote legacy scheduler. ### find_message_to_schedule/3 * ### `find_message_to_schedule(Msg1, Msg2, Opts) -> any()` Search the given base and request message pair to find the message to schedule. The precedence order for search is as follows: 1. A key in `Msg2` with the value `self`, indicating that the entire message is the subject. 2. A key in `Msg2` with another value, present in that message. 3. The body of the message. 4. The message itself. ### find_next_assignment/5 * ### `find_next_assignment(Msg1, Msg2, Schedule, LastSlot, Opts) -> any()` Get the assignments for a process from the message cache, local cache, or the inbox (thanks to a lookahead-worker). ### find_process_message/4 * ### `find_process_message(ProcID, Msg1, ToSched, Opts) -> any()` Find the process message for a given process ID and base message. ### find_remote_scheduler/3 * ### `find_remote_scheduler(ProcID, Rest, Opts) -> any()` Use the SchedulerLocation to find the remote path and return a redirect. If there are multiple locations, try each one in turn until we find the first that matches. ### find_server/3 * ### `find_server(ProcID, Msg1, Opts) -> any()` Locate the correct scheduling server for a given process. ### find_server/4 * ### `find_server(ProcID, Msg1, ToSched, Opts) -> any()` ### find_target_id/3 * ### `find_target_id(Msg1, Msg2, Opts) -> any()` Find the schedule ID from a given request. The precedence order for search is as follows: [1. `ToSched/id` -- in the case of `POST schedule`, handled locally] 2. `Msg2/target` 3. `Msg2/id` when `Msg2` has `type: Process` 4. `Msg1/process/id` 5. `Msg1/id` when `Msg1` has `type: Process` 6. `Msg2/id` ### generate_local_schedule/5 * ### `generate_local_schedule(Format, ProcID, From, To, Opts) -> any()` Generate a `GET /schedule` response for a process. ### generate_redirect/3 * ### `generate_redirect(ProcID, SchedulerLocation, Opts) -> any()` Generate a redirect message to a scheduler. ### get_hint/2 * ### `get_hint(Str, Opts) -> any()` If a hint is present in the string, return it. Else, return not_found. 
### get_local_assignments/4 * ### `get_local_assignments(ProcID, From, RequestedTo, Opts) -> any()` Get the assignments for a process, and whether the request was truncated. ### get_local_schedule_test/0 * ### `get_local_schedule_test() -> any()` ### get_location/3 * ### `get_location(Msg1, Req, Opts) -> any()` Search for the location of the scheduler in the scheduler-location cache. If an address is provided, we search for the location of that specific scheduler. Otherwise, we return the location record for the current node's scheduler, if it has been established. ### get_remote_schedule/5 * ### `get_remote_schedule(RawProcID, From, To, Redirect, Opts) -> any()` Get a schedule from a remote scheduler, but first read all of the assignments from the local cache that we already know about. ### get_schedule/3 * ### `get_schedule(Msg1, Msg2, Opts) -> any()` Generate and return a schedule for a process, optionally between two slots -- labelled as `from` and `to`. If the schedule is not local, we redirect to the remote scheduler or proxy based on the node opts. ### http_get_json_schedule_test_/0 * ### `http_get_json_schedule_test_() -> any()` ### http_get_legacy_schedule_as_aos2_test_/0 * ### `http_get_legacy_schedule_as_aos2_test_() -> any()` ### http_get_legacy_schedule_slot_range_test_/0 * ### `http_get_legacy_schedule_slot_range_test_() -> any()` ### http_get_legacy_schedule_test_/0 * ### `http_get_legacy_schedule_test_() -> any()` ### http_get_legacy_slot_test_/0 * ### `http_get_legacy_slot_test_() -> any()` ### http_get_schedule/4 * ### `http_get_schedule(N, PMsg, From, To) -> any()` ### http_get_schedule/5 * ### `http_get_schedule(N, PMsg, From, To, Format) -> any()` ### http_get_schedule_redirect/0 * ### `http_get_schedule_redirect() -> any()` ### http_get_schedule_redirect_test_/0 * ### `http_get_schedule_redirect_test_() -> any()` ### http_get_schedule_test_/0 * ### `http_get_schedule_test_() -> any()` ### http_get_slot/2 * ### `http_get_slot(N, PMsg) -> any()` ### http_init/0 * ### `http_init() -> any()` ### http_init/1 * ### `http_init(Opts) -> any()` ### http_post_legacy_schedule_test_/0 * ### `http_post_legacy_schedule_test_() -> any()` ### http_post_schedule/0 * ### `http_post_schedule() -> any()` ### http_post_schedule_sign/4 * ### `http_post_schedule_sign(Node, Msg, ProcessMsg, Wallet) -> any()` ### http_post_schedule_test_/0 * ### `http_post_schedule_test_() -> any()` ### info/0 ### `info() -> any()` This device uses a default_handler to route requests to the correct function. ### is_local_scheduler/4 * ### `is_local_scheduler(ProcID, ProcMsg, Rest, Opts) -> any()` Determine if a scheduler is local. If so, return the PID and options. We start the local server if we _can_ be the scheduler and it does not already exist. ### location/3 ### `location(Msg1, Msg2, Opts) -> any()` Router for `record` requests. Expects either a `POST` or `GET` request. ### many_clients/1 * ### `many_clients(Opts) -> any()` ### message_cached_assignments/2 * ### `message_cached_assignments(Msg, Opts) -> any()` Non-device exported helper to get the cached assignments held in a process. ### next/3 ### `next(Msg1, Msg2, Opts) -> any()` Load the schedule for a process into the cache, then return the next assignment. Assumes that Msg1 is a `dev_process` or similar message, having a `Current-Slot` key. It stores a local cache of the schedule in the `priv/To-Process` key. ### node_from_redirect/2 * ### `node_from_redirect(Redirect, Opts) -> any()` Get the node URL from a redirect. 
### parse_schedulers/1 ### `parse_schedulers(SchedLoc) -> any()` General utility functions that are available to other modules. ### post_legacy_schedule/4 * ### `post_legacy_schedule(ProcID, OnlyCommitted, Node, Opts) -> any()` ### post_location/3 * ### `post_location(Msg1, RawReq, Opts) -> any()` Generate a new scheduler location record and register it. We both send the new scheduler-location to the given registry, and return it to the caller. ### post_remote_schedule/4 * ### `post_remote_schedule(RawProcID, Redirect, OnlyCommitted, Opts) -> any()` ### post_schedule/3 * ### `post_schedule(Msg1, Msg2, Opts) -> any()` Schedules a new message on the SU. Searches Msg1 for the appropriate ID, then uses the wallet address of the scheduler to determine if the message is for this scheduler. If so, it schedules the message and returns the assignment. ### read_local_assignments/4 * ### `read_local_assignments(ProcID, From, To, Opts) -> any()` Get the assignments for a process. ### redirect_from_graphql/0 * ### `redirect_from_graphql() -> any()` ### redirect_from_graphql_test_/0 * ### `redirect_from_graphql_test_() -> any()` ### redirect_to_hint_test/0 * ### `redirect_to_hint_test() -> any()` ### register_location_on_boot_test/0 * ### `register_location_on_boot_test() -> any()` Test that a scheduler location is registered on boot. ### register_new_process_test/0 * ### `register_new_process_test() -> any()` ### register_scheduler_test/0 * ### `register_scheduler_test() -> any()` ### remote_slot/3 * ### `remote_slot(ProcID, Redirect, Opts) -> any()` Get the current slot from a remote scheduler. ### remote_slot/4 * ### `remote_slot(X1, ProcID, Node, Opts) -> any()` Get the current slot from a remote scheduler, based on the variant of the process's scheduler. ### router/4 ### `router(X1, Msg1, Msg2, Opts) -> any()` The default handler for the scheduler device. ### schedule/3 ### `schedule(Msg1, Msg2, Opts) -> any()` A router for choosing between getting the existing schedule, or scheduling a new message. ### schedule_message_and_get_slot_test/0 * ### `schedule_message_and_get_slot_test() -> any()` ### single_resolution/1 * ### `single_resolution(Opts) -> any()` ### slot/3 ### `slot(M1, M2, Opts) -> any()` Returns information about the current slot for a process. ### spawn_lookahead_worker/3 * ### `spawn_lookahead_worker(ProcID, Slot, Opts) -> any()` Spawn a new Erlang process to fetch the next assignments from the local cache, if we have them available. ### start/0 ### `start() -> any()` Helper to ensure that the environment is started. ### status/3 ### `status(M1, M2, Opts) -> any()` Returns information about the entire scheduler. ### status_test/0 * ### `status_test() -> any()` ### test_process/0 ### `test_process() -> any()` Generate a _transformed_ process message, not as they are generated by users. See `dev_process` for examples of AO process messages. ### test_process/1 * ### `test_process(Address) -> any()` ### validate_next_slot/5 * ### `validate_next_slot(Msg1, Assignments, Lookahead, Last, Opts) -> any()` Validate the `next` slot generated by `find_next_assignment`. ### without_hint/1 * ### `without_hint(Target) -> any()` Take a process ID or target with a potential hint and return just the process ID. 
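Tying the keys from the module description together, a hedged sketch of driving the scheduler device through AO-Core resolution. The key names (`info`, `slot`, `schedule`) come from the description above; the use of `hb_ao:resolve/3` and the exact request message shapes are assumptions:

```erlang
%% Hedged sketch: resolve the scheduler keys against a process message.
%% Request shapes are assumptions; only the key names come from the description.
scheduler_roundtrip(ProcMsg, ToSchedule, Opts) ->
    %% GET /info -> information about the scheduler.
    {ok, _Info} = hb_ao:resolve(ProcMsg, <<"info">>, Opts),
    %% POST /schedule -> schedule a new message for the process.
    {ok, _Assignment} =
        hb_ao:resolve(
            ProcMsg,
            #{ <<"method">> => <<"POST">>, <<"path">> => <<"schedule">>,
               <<"body">> => ToSchedule },
            Opts
        ),
    %% GET /slot -> the current slot for the process.
    hb_ao:resolve(ProcMsg, <<"slot">>, Opts).
```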
--- END OF FILE: docs/resources/source-code/dev_scheduler.md --- --- START OF FILE: docs/resources/source-code/dev_simple_pay.md --- # [Module dev_simple_pay.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_simple_pay.erl) A simple device that allows the operator to specify a price for a request and then charge the user for it, on a per message basis. ## Description ## The device's ledger is stored in the node message at `simple_pay_ledger`, and can be topped-up by either the operator, or an external device. The price is specified in the node message at `simple_pay_price`. This device acts as both a pricing device and a ledger device, by p4's definition. ## Function Index ##
balance/3Get the balance of a user in the ledger.
charge/3Preprocess a request by checking the ledger and charging the user.
estimate/3Estimate the cost of a request by counting the number of messages in the request, then multiplying by the per-message price.
get_balance/2*Get the balance of a user in the ledger.
get_balance_and_top_up_test/0*
is_operator/2*Check if the request is from the operator.
is_operator/3*
set_balance/3*Adjust a user's balance, normalizing their wallet ID first.
test_opts/1*
topup/3Top up the user's balance in the ledger.
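For orientation, a hedged sketch of the node message keys named in the description above (`simple_pay_ledger`, `simple_pay_price`); the value types, units, and ledger layout are assumptions:

```erlang
%% Hedged sketch: node options relevant to dev_simple_pay. Only the key names
%% are taken from the module description; values and ledger layout are assumed.
example_node_opts() ->
    #{
        simple_pay_price  => 2,    % assumed per-message price
        simple_pay_ledger => #{    % assumed layout: wallet address => balance
            <<"ExampleWalletAddress">> => 100
        }
    }.
```

With options of this shape, `estimate/3` prices a request as the number of messages multiplied by the per-message price, and `charge/3` debits the signer's ledger entry, as described in the details below.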
## Function Details ## ### balance/3 ### `balance(X1, RawReq, NodeMsg) -> any()` Get the balance of a user in the ledger. ### charge/3 ### `charge(X1, RawReq, NodeMsg) -> any()` Preprocess a request by checking the ledger and charging the user. We can charge the user at this stage because we know statically what the price will be. ### estimate/3 ### `estimate(X1, EstimateReq, NodeMsg) -> any()` Estimate the cost of a request by counting the number of messages in the request, then multiplying by the per-message price. The operator does not pay for their own requests. ### get_balance/2 * ### `get_balance(Signer, NodeMsg) -> any()` Get the balance of a user in the ledger. ### get_balance_and_top_up_test/0 * ### `get_balance_and_top_up_test() -> any()` ### is_operator/2 * ### `is_operator(Req, NodeMsg) -> any()` Check if the request is from the operator. ### is_operator/3 * ### `is_operator(Req, NodeMsg, OperatorAddr) -> any()` ### set_balance/3 * ### `set_balance(Signer, Amount, NodeMsg) -> any()` Adjust a user's balance, normalizing their wallet ID first. ### test_opts/1 * ### `test_opts(Ledger) -> any()` ### topup/3 ### `topup(X1, Req, NodeMsg) -> any()` Top up the user's balance in the ledger. --- END OF FILE: docs/resources/source-code/dev_simple_pay.md --- --- START OF FILE: docs/resources/source-code/dev_snp_nif.md --- # [Module dev_snp_nif.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_snp_nif.erl) ## Function Index ##
check_snp_support/0
compute_launch_digest/1
compute_launch_digest_test/0*
generate_attestation_report/2
generate_attestation_report_test/0*
init/0*
not_loaded/1*
verify_measurement/2
verify_measurement_test/0*
verify_signature/1
verify_signature_test/0*
## Function Details ## ### check_snp_support/0 ### `check_snp_support() -> any()` ### compute_launch_digest/1 ### `compute_launch_digest(Args) -> any()` ### compute_launch_digest_test/0 * ### `compute_launch_digest_test() -> any()` ### generate_attestation_report/2 ### `generate_attestation_report(UniqueData, VMPL) -> any()` ### generate_attestation_report_test/0 * ### `generate_attestation_report_test() -> any()` ### init/0 * ### `init() -> any()` ### not_loaded/1 * ### `not_loaded(Line) -> any()` ### verify_measurement/2 ### `verify_measurement(Report, Expected) -> any()` ### verify_measurement_test/0 * ### `verify_measurement_test() -> any()` ### verify_signature/1 ### `verify_signature(Report) -> any()` ### verify_signature_test/0 * ### `verify_signature_test() -> any()` --- END OF FILE: docs/resources/source-code/dev_snp_nif.md --- --- START OF FILE: docs/resources/source-code/dev_snp.md --- # [Module dev_snp.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_snp.erl) This device offers an interface for validating AMD SEV-SNP commitments, as well as generating them, if called in an appropriate environment. ## Function Index ##
execute_is_trusted/3*Ensure that all of the software hashes are trusted.
generate/3Generate a commitment report and emit it as a message, including all of the necessary data to generate the nonce (ephemeral node address + node message ID), as well as the expected measurement (firmware, kernel, and VMSAs hashes).
generate_nonce/2*Generate the nonce to use in the commitment report.
is_debug/1*Ensure that the node's debug policy is disabled.
real_node_test/0*
report_data_matches/3*Ensure that the report data matches the expected report data.
trusted/3Validates if a given message parameter matches a trusted value from the SNP trusted list. Returns {ok, true} if the message is trusted, {ok, false} otherwise.
verify/3Verify a commitment report message, validating the identity of a remote node, its ephemeral private address, and the integrity of the report.
## Function Details ## ### execute_is_trusted/3 * ### `execute_is_trusted(M1, Msg, NodeOpts) -> any()` Ensure that all of the software hashes are trusted. The caller may set a specific device to use for the `is-trusted` key. The device must then implement the `trusted` resolver. ### generate/3 ### `generate(M1, M2, Opts) -> any()` Generate a commitment report and emit it as a message, including all of the necessary data to generate the nonce (ephemeral node address + node message ID), as well as the expected measurement (firmware, kernel, and VMSAs hashes). ### generate_nonce/2 * ### `generate_nonce(RawAddress, RawNodeMsgID) -> any()` Generate the nonce to use in the commitment report. ### is_debug/1 * ### `is_debug(Report) -> any()` Ensure that the node's debug policy is disabled. ### real_node_test/0 * ### `real_node_test() -> any()` ### report_data_matches/3 * ### `report_data_matches(Address, NodeMsgID, ReportData) -> any()` Ensure that the report data matches the expected report data. ### trusted/3 ### `trusted(Msg1, Msg2, NodeOpts) -> any()` Validates if a given message parameter matches a trusted value from the SNP trusted list. Returns {ok, true} if the message is trusted, {ok, false} otherwise. ### verify/3 ### `verify(M1, M2, NodeOpts) -> any()` Verify a commitment report message, validating the identity of a remote node, its ephemeral private address, and the integrity of the report. The checks that must be performed to validate the report are: 1. Verify the address and the node message ID are the same as the ones used to generate the nonce. 2. Verify the address that signed the message is the same as the one used to generate the nonce. 3. Verify that the debug flag is disabled. 4. Verify that the firmware, kernel, and OS (VMSAs) hashes, part of the measurement, are trusted. 5. Verify the measurement is valid. 6. Verify the report's certificate chain to hardware root of trust. --- END OF FILE: docs/resources/source-code/dev_snp.md --- --- START OF FILE: docs/resources/source-code/dev_stack.md --- # [Module dev_stack.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_stack.erl) A device that contains a stack of other devices, and manages their execution. ## Description ## It can run in two modes: fold (the default), and map. In fold mode, it runs upon input messages in the order of their keys. A stack maintains and passes forward a state (expressed as a message) as it progresses through devices. For example, a stack of devices as follows: ``` Device -> Stack Device-Stack/1/Name -> Add-One-Device Device-Stack/2/Name -> Add-Two-Device ``` When called with the message: ``` #{ Path = "FuncName", binary => <<"0">> } ``` Will produce the output: ``` #{ Path = "FuncName", binary => <<"3">> } {ok, #{ bin => <<"3">> }} ``` In map mode, the stack will run over all the devices in the stack, and combine their results into a single message. Each of the devices' output values has a key that is the device's name in the `Device-Stack` (its number if the stack is a list). You can switch between fold and map modes by setting the `Mode` key in the `Msg2` to either `Fold` or `Map`, or set it globally for the stack by setting the `Mode` key in the `Msg1` message. The key in `Msg2` takes precedence over the key in `Msg1`. The key that is called upon the device stack is the same key that is used upon the devices that are contained within it. For example, in the above scenario we resolve FuncName on the stack, leading FuncName to be called on Add-One-Device and Add-Two-Device. 
A device stack responds to special statuses upon responses as follows: `skip`: Skips the rest of the device stack for the current pass. `pass`: Causes the stack to increment its pass number and re-execute the stack from the first device, maintaining the state accumulated so far. Only available in fold mode. In all cases, the device stack will return the accumulated state to the caller as the result of the call to the stack. The dev_stack adds additional metadata to the message in order to track the state of its execution as it progresses through devices. These keys are as follows: `Stack-Pass`: The number of times the stack has reset and re-executed from the first device for the current message. `Input-Prefix`: The prefix that the device should use for its outputs and inputs. `Output-Prefix`: The device that was previously executed. All counters used by the stack are initialized to 1. Additionally, as implemented in HyperBEAM, the device stack will honor a number of options that are passed to it as keys in the message. Each of these options is also passed through to the devices contained within the stack during execution. These options include: `Error-Strategy`: Determines how the stack handles errors from devices. See `maybe_error/5` for more information. `Allow-Multipass`: Determines whether the stack is allowed to automatically re-execute from the first device when the `pass` tag is returned. See `maybe_pass/3` for more information. Under-the-hood, dev_stack uses a `default` handler to resolve all calls to devices, aside `set/2` which it calls itself to mutate the message's `device` key in order to change which device is currently being executed. This method allows dev_stack to ensure that the message's HashPath is always correct, even as it delegates calls to other devices. An example flow for a `dev_stack` execution is as follows: ``` /Msg1/AlicesExcitingKey -> dev_stack:execute -> /Msg1/Set?device=/Device-Stack/1 -> /Msg2/AlicesExcitingKey -> /Msg3/Set?device=/Device-Stack/2 -> /Msg4/AlicesExcitingKey ... -> /MsgN/Set?device=[This-Device] -> returns {ok, /MsgN+1} -> /MsgN+1 ``` In this example, the `device` key is mutated a number of times, but the resulting HashPath remains correct and verifiable. ## Function Index ##
benchmark_test/0*
example_device_for_stack_test/0*
generate_append_device/1
generate_append_device/2*
increment_pass/2*Helper to increment the pass number.
info/1
input_and_output_prefixes_test/0*
input_output_prefixes_passthrough_test/0*
input_prefix/3Return the input prefix for the stack.
many_devices_test/0*
maybe_error/5*
no_prefix_test/0*
not_found_test/0*
output_prefix/3Return the output prefix for the stack.
output_prefix_test/0*
pass_test/0*
prefix/3Return the default prefix for the stack.
reinvocation_test/0*
resolve_fold/3*The main device stack execution engine.
resolve_fold/4*
resolve_map/3*Map over the devices in the stack, accumulating the output in a single message of keys and values, where keys are the same as the keys in the original message (typically a number).
router/3*
router/4The device stack key router.
simple_map_test/0*
simple_stack_execute_test/0*
skip_test/0*
test_prefix_msg/0*
transform/3*Return Message1, transformed such that the device named Key from the Device-Stack key in the message takes the place of the original Device key.
transform_external_call_device_test/0*Ensure we can generate a transformer message that can be called to return a version of msg1 with only that device attached.
transform_internal_call_device_test/0*Test that the transform function can be called correctly internally by other functions in the module.
transformer_message/2*Return a message which, when given a key, will transform the message such that the device named Key from the Device-Stack key in the message takes the place of the original Device key.
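To make the fold example from the module description concrete, here is a hedged sketch of a stack message. The structure (numbered `Device-Stack` entries, a `Mode` key, and the `Allow-Multipass` option) follows the description above; the literal device identifier and the exact key spellings are assumptions:

```erlang
%% Hedged sketch of a stack message, following the description's
%% Add-One-Device / Add-Two-Device example. Key spellings and the device
%% identifier are assumptions.
example_stack() ->
    #{
        <<"device">> => <<"stack@1.0">>,    % assumed identifier for dev_stack
        <<"device-stack">> =>
            #{
                <<"1">> => <<"Add-One-Device">>,
                <<"2">> => <<"Add-Two-Device">>
            },
        <<"mode">> => <<"Fold">>,           % or <<"Map">>, per the description
        <<"allow-multipass">> => true       % honor the `pass` status (assumed value)
    }.
```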
## Function Details ## ### benchmark_test/0 * ### `benchmark_test() -> any()` ### example_device_for_stack_test/0 * ### `example_device_for_stack_test() -> any()` ### generate_append_device/1 ### `generate_append_device(Separator) -> any()` ### generate_append_device/2 * ### `generate_append_device(Separator, Status) -> any()` ### increment_pass/2 * ### `increment_pass(Message, Opts) -> any()` Helper to increment the pass number. ### info/1 ### `info(Msg) -> any()` ### input_and_output_prefixes_test/0 * ### `input_and_output_prefixes_test() -> any()` ### input_output_prefixes_passthrough_test/0 * ### `input_output_prefixes_passthrough_test() -> any()` ### input_prefix/3 ### `input_prefix(Msg1, Msg2, Opts) -> any()` Return the input prefix for the stack. ### many_devices_test/0 * ### `many_devices_test() -> any()` ### maybe_error/5 * ### `maybe_error(Message1, Message2, DevNum, Info, Opts) -> any()` ### no_prefix_test/0 * ### `no_prefix_test() -> any()` ### not_found_test/0 * ### `not_found_test() -> any()` ### output_prefix/3 ### `output_prefix(Msg1, Msg2, Opts) -> any()` Return the output prefix for the stack. ### output_prefix_test/0 * ### `output_prefix_test() -> any()` ### pass_test/0 * ### `pass_test() -> any()` ### prefix/3 ### `prefix(Msg1, Msg2, Opts) -> any()` Return the default prefix for the stack. ### reinvocation_test/0 * ### `reinvocation_test() -> any()` ### resolve_fold/3 * ### `resolve_fold(Message1, Message2, Opts) -> any()` The main device stack execution engine. See the moduledoc for more information. ### resolve_fold/4 * ### `resolve_fold(Message1, Message2, DevNum, Opts) -> any()` ### resolve_map/3 * ### `resolve_map(Message1, Message2, Opts) -> any()` Map over the devices in the stack, accumulating the output in a single message of keys and values, where keys are the same as the keys in the original message (typically a number). ### router/3 * ### `router(Message1, Message2, Opts) -> any()` ### router/4 ### `router(Key, Message1, Message2, Opts) -> any()` The device stack key router. Sends the request to `resolve_stack`, except for `set/2` which is handled by the default implementation in `dev_message`. ### simple_map_test/0 * ### `simple_map_test() -> any()` ### simple_stack_execute_test/0 * ### `simple_stack_execute_test() -> any()` ### skip_test/0 * ### `skip_test() -> any()` ### test_prefix_msg/0 * ### `test_prefix_msg() -> any()` ### transform/3 * ### `transform(Msg1, Key, Opts) -> any()` Return Message1, transformed such that the device named `Key` from the `Device-Stack` key in the message takes the place of the original `Device` key. This transformation allows dev_stack to correctly track the HashPath of the message as it delegates execution to devices contained within it. ### transform_external_call_device_test/0 * ### `transform_external_call_device_test() -> any()` Ensure we can generate a transformer message that can be called to return a version of msg1 with only that device attached. ### transform_internal_call_device_test/0 * ### `transform_internal_call_device_test() -> any()` Test that the transform function can be called correctly internally by other functions in the module. ### transformer_message/2 * ### `transformer_message(Msg1, Opts) -> any()` Return a message which, when given a key, will transform the message such that the device named `Key` from the `Device-Stack` key in the message takes the place of the original `Device` key. 
This allows users to call a single device from the stack: /Msg1/Transform/DeviceName/keyInDevice -> keyInDevice executed on DeviceName against Msg1. --- END OF FILE: docs/resources/source-code/dev_stack.md --- --- START OF FILE: docs/resources/source-code/dev_test.md --- # [Module dev_test.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_test.erl) ## Function Index ##
compute/3Example implementation of a compute handler.
compute_test/0*
delay/3Does nothing, just sleeps Req/duration or 750 ms and returns the appropriate form in order to be used as a hook.
device_with_function_key_module_test/0*Tests the resolution of a default function.
increment_counter/3Find a test worker's PID and send it an increment message.
index/3Example index handler.
info/1Exports a default_handler function that can be used to test the handler resolution mechanism.
info/3Exports a default_handler function that can be used to test the handler resolution mechanism.
init/3Example init/3 handler.
load/3Return a message with the device set to this module.
mul/2Example implementation of an imported function for a WASM executor.
postprocess/3Set the postprocessor-called key to true in the HTTP server.
restore/3Example restore/3 handler.
restore_test/0*
snapshot/3Do nothing when asked to snapshot.
test_func/1
update_state/3Find a test worker's PID and send it an update message.
## Function Details ## ### compute/3 ### `compute(Msg1, Msg2, Opts) -> any()` Example implementation of a `compute` handler. Makes a running list of the slots that have been computed in the state message and places the new slot number in the results key. ### compute_test/0 * ### `compute_test() -> any()` ### delay/3 ### `delay(Msg1, Req, Opts) -> any()` Does nothing, just sleeps `Req/duration or 750` ms and returns the appropriate form in order to be used as a hook. ### device_with_function_key_module_test/0 * ### `device_with_function_key_module_test() -> any()` Tests the resolution of a default function. ### increment_counter/3 ### `increment_counter(Msg1, Msg2, Opts) -> any()` Find a test worker's PID and send it an increment message. ### index/3 ### `index(Msg, Req, Opts) -> any()` Example index handler. ### info/1 ### `info(X1) -> any()` Exports a default_handler function that can be used to test the handler resolution mechanism. ### info/3 ### `info(Msg1, Msg2, Opts) -> any()` Exports a default_handler function that can be used to test the handler resolution mechanism. ### init/3 ### `init(Msg, Msg2, Opts) -> any()` Example `init/3` handler. Sets the `Already-Seen` key to an empty list. ### load/3 ### `load(Base, X2, Opts) -> any()` Return a message with the device set to this module. ### mul/2 ### `mul(Msg1, Msg2) -> any()` Example implementation of an `imported` function for a WASM executor. ### postprocess/3 ### `postprocess(Msg, X2, Opts) -> any()` Set the `postprocessor-called` key to true in the HTTP server. ### restore/3 ### `restore(Msg, Msg2, Opts) -> any()` Example `restore/3` handler. Sets the hidden key `Test/Started` to the value of `Current-Slot` and checks whether the `Already-Seen` key is valid. ### restore_test/0 * ### `restore_test() -> any()` ### snapshot/3 ### `snapshot(Msg1, Msg2, Opts) -> any()` Do nothing when asked to snapshot. ### test_func/1 ### `test_func(X1) -> any()` ### update_state/3 ### `update_state(Msg, Msg2, Opts) -> any()` Find a test worker's PID and send it an update message. --- END OF FILE: docs/resources/source-code/dev_test.md --- --- START OF FILE: docs/resources/source-code/dev_volume.md --- # [Module dev_volume.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_volume.erl) Secure Volume Management for HyperBEAM Nodes. ## Description ## This module handles encrypted storage operations for HyperBEAM, providing a robust and secure approach to data persistence. It manages the complete lifecycle of encrypted volumes from detection to creation, formatting, and mounting. Key responsibilities: - Volume detection and initialization - Encrypted partition creation and formatting - Secure mounting using cryptographic keys - Store path reconfiguration to use mounted volumes - Automatic handling of various system states (new device, existing partition, etc.) The primary entry point is the `mount/3` function, which orchestrates the entire process based on the provided configuration parameters. This module works alongside `hb_volume` which provides the low-level operations for device manipulation. Security considerations: - Ensures data at rest is protected through LUKS encryption - Provides proper volume sanitization and secure mounting - IMPORTANT: This module only applies configuration set in node options and does NOT accept disk operations via HTTP requests. It cannot format arbitrary disks as all operations are safeguarded by host operating system permissions enforced upon the HyperBEAM environment. ## Function Index ##
check_base_device/8*Check if the base device exists and if it does, check if the partition exists.
check_partition/8*Check if the partition exists.
create_and_mount_partition/8*Create, format and mount a new partition.
decrypt_volume_key/2*Decrypts an encrypted volume key using the node's private key.
format_and_mount/6*Format and mount a newly created partition.
info/1Exported function for getting device info; controls which functions are exposed via the device API.
info/3HTTP info response providing information about this device.
mount/3Handles the complete process of secure encrypted volume mounting.
mount_existing_partition/6*Mount an existing partition.
mount_formatted_partition/6*Mount a newly formatted partition.
public_key/3Returns the node's public key for secure key exchange.
update_node_config/2*Update the node's configuration with the new store.
update_store_path/2*Update the store path to use the mounted volume.
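The `mount/3` entry below lists the node options this device consumes. As a hedged sketch, a configuration might look like the following; the key names are taken from the `mount/3` documentation, while every value is a placeholder assumption:

```erlang
%% Hedged sketch: node options read by dev_volume:mount/3. Key names come from
%% the mount/3 documentation below; all values are placeholders.
example_volume_opts() ->
    #{
        priv_volume_key       => <<"...">>,           % required encryption key (elided)
        volume_device         => <<"/dev/sdb">>,      % placeholder base device
        volume_partition      => <<"/dev/sdb1">>,     % placeholder partition
        volume_partition_type => <<"ext4">>,          % placeholder filesystem type
        volume_name           => <<"hb_volume">>,     % placeholder encrypted volume name
        volume_mount_point    => <<"/mnt/hb">>,       % placeholder mount point
        volume_store_path     => <<"/mnt/hb/store">>  % placeholder store path
    }.
```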
## Function Details ## ### check_base_device/8 * ###

`check_base_device(Device::term(), Partition::term(), PartitionType::term(), VolumeName::term(), MountPoint::term(), StorePath::term(), Key::term(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`Device`: The base device to check.
`Partition`: The partition to check.
`PartitionType`: The type of partition to check.
`VolumeName`: The name of the volume to check.
`MountPoint`: The mount point to check.
`StorePath`: The store path to check.
`Key`: The key to check.
`Opts`: The options to check.
returns: `{ok, Binary}` on success with operation result message, or `{error, Binary}` on failure with error message. Check if the base device exists and if it does, check if the partition exists. ### check_partition/8 * ###

`check_partition(Device::term(), Partition::term(), PartitionType::term(), VolumeName::term(), MountPoint::term(), StorePath::term(), Key::term(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`Device`: The base device to check.
`Partition`: The partition to check.
`PartitionType`: The type of partition to check.
`VolumeName`: The name of the volume to check.
`MountPoint`: The mount point to check.
`StorePath`: The store path to check.
`Key`: The key to check.
`Opts`: The options to check.
returns: `{ok, Binary}` on success with operation result message, or `{error, Binary}` on failure with error message. Check if the partition exists. If it does, attempt to mount it. If it doesn't exist, create it, format it with encryption and mount it. ### create_and_mount_partition/8 * ###

`create_and_mount_partition(Device::term(), Partition::term(), PartitionType::term(), Key::term(), MountPoint::term(), VolumeName::term(), StorePath::term(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`Device`: The device to create the partition on.
`Partition`: The partition to create.
`PartitionType`: The type of partition to create.
`Key`: The key to create the partition with.
`MountPoint`: The mount point to mount the partition to.
`VolumeName`: The name of the volume to mount.
`StorePath`: The store path to mount.
`Opts`: The options to mount.
returns: `{ok, Binary}` on success with operation result message, or `{error, Binary}` on failure with error message. Create, format and mount a new partition. ### decrypt_volume_key/2 * ###

`decrypt_volume_key(EncryptedKeyBase64::binary(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`Opts`: A map of configuration options.
returns: `{ok, DecryptedKey}` on successful decryption, or `{error, Binary}` if decryption fails. Decrypts an encrypted volume key using the node's private key. This function takes an encrypted key (typically sent by a client who encrypted it with the node's public key) and decrypts it using the node's private RSA key. ### format_and_mount/6 * ###

`format_and_mount(Partition::term(), Key::term(), MountPoint::term(), VolumeName::term(), StorePath::term(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`Partition`: The partition to format and mount.
`Key`: The key to format and mount the partition with.
`MountPoint`: The mount point to mount the partition to.
`VolumeName`: The name of the volume to mount.
`StorePath`: The store path to mount.
`Opts`: The options to mount.
returns: `{ok, Binary}` on success with operation result message, or `{error, Binary}` on failure with error message. Format and mount a newly created partition. ### info/1 ### `info(X1) -> any()` Exported function for getting device info; controls which functions are exposed via the device API. ### info/3 ### `info(Msg1, Msg2, Opts) -> any()` HTTP info response providing information about this device. ### mount/3 ###

`mount(M1::term(), M2::term(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`M1`: Base message for context.
`M2`: Request message with operation details.
`Opts`: A map of configuration options for volume operations.
returns: `{ok, Binary}` on success with operation result message, or `{error, Binary}` on failure with error message. Handles the complete process of secure encrypted volume mounting. This function performs the following operations depending on the state: 1. Validates the encryption key is present 2. Checks if the base device exists 3. Checks if the partition exists on the device 4. If the partition exists, attempts to mount it 5. If the partition doesn't exist, creates it, formats it with encryption and mounts it 6. Updates the node's store configuration to use the mounted volume Config options in Opts map: - priv_volume_key: (Required) The encryption key - volume_device: Base device path - volume_partition: Partition path - volume_partition_type: Filesystem type - volume_name: Name for encrypted volume - volume_mount_point: Where to mount - volume_store_path: Store path on volume ### mount_existing_partition/6 * ###

`mount_existing_partition(Partition::term(), Key::term(), MountPoint::term(), VolumeName::term(), StorePath::term(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`Partition`: The partition to mount.
`Key`: The key to mount.
`MountPoint`: The mount point to mount.
`VolumeName`: The name of the volume to mount.
`StorePath`: The store path to mount.
`Opts`: The options to mount.
returns: `{ok, Binary}` on success with operation result message, or `{error, Binary}` on failure with error message. Mount an existing partition. ### mount_formatted_partition/6 * ###

`mount_formatted_partition(Partition::term(), Key::term(), MountPoint::term(), VolumeName::term(), StorePath::term(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`Partition`: The partition to mount.
`Key`: The key to mount the partition with.
`MountPoint`: The mount point to mount the partition to.
`VolumeName`: The name of the volume to mount.
`StorePath`: The store path to mount.
`Opts`: The options to mount.
returns: `{ok, Binary}` on success with operation result message, or `{error, Binary}` on failure with error message. Mount a newly formatted partition. ### public_key/3 ###

`public_key(M1::term(), M2::term(), Opts::map()) -> {ok, map()} | {error, binary()}`

`Opts`: A map of configuration options.
returns: `{ok, Map}` containing the node's public key on success, or `{error, Binary}` if the node's wallet is not available. Returns the node's public key for secure key exchange. This function retrieves the node's wallet and extracts the public key for encryption purposes. It allows users to securely exchange encryption keys by first encrypting their volume key with the node's public key. The process ensures that sensitive keys are never transmitted in plaintext. The encrypted key can then be securely sent to the node, which will decrypt it using its private key before using it for volume encryption. ### update_node_config/2 * ###

`update_node_config(NewStore::term(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`NewStore`: The new store to update the node's configuration with.
`Opts`: The options to update the node's configuration with.
returns: `{ok, Binary}` on success with operation result message, or `{error, Binary}` on failure with error message. Update the node's configuration with the new store. ### update_store_path/2 * ###

`update_store_path(StorePath::term(), Opts::map()) -> {ok, binary()} | {error, binary()}`

`StorePath`: The store path to update.
`Opts`: The options to update.
returns: `{ok, Binary}` on success with operation result message, or `{error, Binary}` on failure with error message. Update the store path to use the mounted volume. --- END OF FILE: docs/resources/source-code/dev_volume.md --- --- START OF FILE: docs/resources/source-code/dev_wasi.md --- # [Module dev_wasi.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_wasi.erl) A virtual filesystem device. ## Description ## Implements a file-system-as-map structure, which is traversable externally. Each file is a binary and each directory is an AO-Core message. Additionally, this module adds a series of WASI-preview-1 compatible functions for accessing the filesystem as imported functions by WASM modules. ## Function Index ##
basic_aos_exec_test/0*
clock_time_get/3
compute/1
fd_read/3Read from a file using the WASI-p1 standard interface.
fd_read/5*
fd_write/3WASM stdlib implementation of fd_write, using the WASI-p1 standard interface.
fd_write/5*
gen_test_aos_msg/1*
gen_test_env/0*
generate_wasi_stack/3*
init/0*
init/3On-boot, initialize the virtual file system with: - Empty stdio files - WASI-preview-1 compatible functions for accessing the filesystem - File descriptors for those files.
parse_iovec/2*Parse an iovec in WASI-preview-1 format.
path_open/3Adds a file descriptor to the state message.
stdout/1Return the stdout buffer from a state message.
vfs_is_serializable_test/0*
wasi_stack_is_serializable_test/0*
## Function Details ## ### basic_aos_exec_test/0 * ### `basic_aos_exec_test() -> any()` ### clock_time_get/3 ### `clock_time_get(Msg1, Msg2, Opts) -> any()` ### compute/1 ### `compute(Msg1) -> any()` ### fd_read/3 ### `fd_read(Msg1, Msg2, Opts) -> any()` Read from a file using the WASI-p1 standard interface. ### fd_read/5 * ### `fd_read(S, Instance, X3, BytesRead, Opts) -> any()` ### fd_write/3 ### `fd_write(Msg1, Msg2, Opts) -> any()` WASM stdlib implementation of `fd_write`, using the WASI-p1 standard interface. ### fd_write/5 * ### `fd_write(S, Instance, X3, BytesWritten, Opts) -> any()` ### gen_test_aos_msg/1 * ### `gen_test_aos_msg(Command) -> any()` ### gen_test_env/0 * ### `gen_test_env() -> any()` ### generate_wasi_stack/3 * ### `generate_wasi_stack(File, Func, Params) -> any()` ### init/0 * ### `init() -> any()` ### init/3 ### `init(M1, M2, Opts) -> any()` On-boot, initialize the virtual file system with: - Empty stdio files - WASI-preview-1 compatible functions for accessing the filesystem - File descriptors for those files. ### parse_iovec/2 * ### `parse_iovec(Instance, Ptr) -> any()` Parse an iovec in WASI-preview-1 format. ### path_open/3 ### `path_open(Msg1, Msg2, Opts) -> any()` Adds a file descriptor to the state message. path_open(M, Instance, [FDPtr, LookupFlag, PathPtr|_]) -> ### stdout/1 ### `stdout(M) -> any()` Return the stdout buffer from a state message. ### vfs_is_serializable_test/0 * ### `vfs_is_serializable_test() -> any()` ### wasi_stack_is_serializable_test/0 * ### `wasi_stack_is_serializable_test() -> any()` --- END OF FILE: docs/resources/source-code/dev_wasi.md --- --- START OF FILE: docs/resources/source-code/dev_wasm.md --- # [Module dev_wasm.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/dev_wasm.erl) A device that executes a WASM image on messages using the Memory-64 preview standard. ## Description ## In the backend, this device uses `beamr`: An Erlang wrapper for WAMR, the WebAssembly Micro Runtime. The device has the following requirements and interface: ``` M1/Init -> Assumes: M1/process M1/[Prefix]/image Generates: /priv/[Prefix]/instance /priv/[Prefix]/import-resolver Side-effects: Creates a WASM executor loaded in memory of the HyperBEAM node. M1/Compute -> Assumes: M1/priv/[Prefix]/instance M1/priv/[Prefix]/import-resolver M1/process M2/message M2/message/function OR M1/function M2/message/parameters OR M1/parameters Generates: /results/[Prefix]/type /results/[Prefix]/output Side-effects: Calls the WASM executor with the message and process. M1/[Prefix]/state -> Assumes: M1/priv/[Prefix]/instance Generates: Raw binary WASM state ``` ## Function Index ##
basic_execution_64_test/0*
basic_execution_test/0*
benchmark_test/0*
cache_wasm_image/1
cache_wasm_image/2
compute/3Call the WASM executor with a message that has been prepared by a prior pass.
default_import_resolver/3*Take a BEAMR import call and resolve it using hb_ao.
import/3Handle standard library calls by: 1.
imported_function_test/0*
info/2Export all functions aside from the instance/3 function.
init/0*
init/3Boot a WASM image on the image stated in the process/image field of the message.
init_test/0*
input_prefix_test/0*
instance/3Get the WASM instance from the message.
normalize/3Normalize the message to have an open WASM instance, but no literal State key.
process_prefixes_test/0*Test that realistic prefixing for a dev_process works -- including both inputs (from Process/) and outputs (to the Device-Key).
snapshot/3Serialize the WASM state to a binary.
state_export_and_restore_test/0*
terminate/3Tear down the WASM executor.
test_run_wasm/4*
undefined_import_stub/3*Log the call to the standard library as an event, and write the call details into the message.
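A hedged sketch of the `init -> compute` flow described by the interface contract in the module description above. The `image`, `function`, `parameters`, and `results` names come from that contract; the use of `hb_ao:resolve/3` and the exact request shapes are assumptions:

```erlang
%% Hedged sketch: initialize a WASM instance and run one call, following the
%% interface contract in the module description. Request shapes are assumptions.
run_wasm(BaseMsg, Func, Params, Opts) ->
    {ok, Initialized} = hb_ao:resolve(BaseMsg, <<"init">>, Opts),
    {ok, Computed} =
        hb_ao:resolve(
            Initialized,
            #{ <<"path">> => <<"compute">>,
               <<"message">> =>
                   #{ <<"function">> => Func, <<"parameters">> => Params } },
            Opts
        ),
    %% Results are placed under /results/[Prefix]/... per the description.
    hb_ao:resolve(Computed, <<"results">>, Opts).
```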
## Function Details ## ### basic_execution_64_test/0 * ### `basic_execution_64_test() -> any()` ### basic_execution_test/0 * ### `basic_execution_test() -> any()` ### benchmark_test/0 * ### `benchmark_test() -> any()` ### cache_wasm_image/1 ### `cache_wasm_image(Image) -> any()` ### cache_wasm_image/2 ### `cache_wasm_image(Image, Opts) -> any()` ### compute/3 ### `compute(RawM1, M2, Opts) -> any()` Call the WASM executor with a message that has been prepared by a prior pass. ### default_import_resolver/3 * ### `default_import_resolver(Msg1, Msg2, Opts) -> any()` Take a BEAMR import call and resolve it using `hb_ao`. ### import/3 ### `import(Msg1, Msg2, Opts) -> any()` Handle standard library calls by: 1. Adding the right prefix to the path from BEAMR. 2. Adding the state to the message at the stdlib path. 3. Resolving the adjusted-path-Msg2 against the added-state-Msg1. 4. If it succeeds, return the new state from the message. 5. If it fails with `not_found`, call the stub handler. ### imported_function_test/0 * ### `imported_function_test() -> any()` ### info/2 ### `info(Msg1, Opts) -> any()` Export all functions aside the `instance/3` function. ### init/0 * ### `init() -> any()` ### init/3 ### `init(M1, M2, Opts) -> any()` Boot a WASM image on the image stated in the `process/image` field of the message. ### init_test/0 * ### `init_test() -> any()` ### input_prefix_test/0 * ### `input_prefix_test() -> any()` ### instance/3 ### `instance(M1, M2, Opts) -> any()` Get the WASM instance from the message. Note that this function is exported such that other devices can use it, but it is excluded from calls from AO-Core resolution directly. ### normalize/3 ### `normalize(RawM1, M2, Opts) -> any()` Normalize the message to have an open WASM instance, but no literal `State` key. Ensure that we do not change the hashpath during this process. ### process_prefixes_test/0 * ### `process_prefixes_test() -> any()` Test that realistic prefixing for a `dev_process` works -- including both inputs (from `Process/`) and outputs (to the Device-Key) work ### snapshot/3 ### `snapshot(M1, M2, Opts) -> any()` Serialize the WASM state to a binary. ### state_export_and_restore_test/0 * ### `state_export_and_restore_test() -> any()` ### terminate/3 ### `terminate(M1, M2, Opts) -> any()` Tear down the WASM executor. ### test_run_wasm/4 * ### `test_run_wasm(File, Func, Params, AdditionalMsg) -> any()` ### undefined_import_stub/3 * ### `undefined_import_stub(Msg1, Msg2, Opts) -> any()` Log the call to the standard library as an event, and write the call details into the message. --- END OF FILE: docs/resources/source-code/dev_wasm.md --- --- START OF FILE: docs/resources/source-code/hb_ao_test_vectors.md --- # [Module hb_ao_test_vectors.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_ao_test_vectors.erl) Uses a series of different `Opts` values to test the resolution engine's execution under different circumstances. ## Function Index ##
as_path_test/1*
basic_get_test/1*
basic_set_test/1*
continue_as_test/1*
deep_recursive_get_test/1*
deep_set_new_messages_test/0*
deep_set_test/1*
deep_set_with_device_test/1*
denormalized_device_key_test/1*
device_excludes_test/1*
device_exports_test/1*
device_with_default_handler_function_test/1*
device_with_handler_function_test/1*
exec_dummy_device/2*Ensure that we can read a device from the cache then execute it.
gen_default_device/0*Create a simple test device that implements the default handler.
gen_handler_device/0*Create a simple test device that implements the handler key.
generate_device_with_keys_using_args/0*Generates a test device with three keys, each of which uses progressively more of the arguments that can be passed to a device key.
get_as_with_device_test/1*
get_with_device_test/1*
key_from_id_device_with_args_test/1*Test that arguments are passed to a device key as expected.
key_to_binary_test/1*
list_transform_test/1*
load_as_test/1*
load_device_test/0*
recursive_get_test/1*
resolve_binary_key_test/1*
resolve_from_multiple_keys_test/1*
resolve_id_test/1*
resolve_key_twice_test/1*
resolve_path_element_test/1*
resolve_simple_test/1*
run_all_test_/0*Run each test in the file with each set of options.
run_test/0*
set_with_device_test/1*
start_as_test/1*
start_as_with_parameters_test/1*
step_hook_test/1*
test_opts/0*
test_suite/0*
untrusted_load_device_test/0*
## Function Details ## ### as_path_test/1 * ### `as_path_test(Opts) -> any()` ### basic_get_test/1 * ### `basic_get_test(Opts) -> any()` ### basic_set_test/1 * ### `basic_set_test(Opts) -> any()` ### continue_as_test/1 * ### `continue_as_test(Opts) -> any()` ### deep_recursive_get_test/1 * ### `deep_recursive_get_test(Opts) -> any()` ### deep_set_new_messages_test/0 * ### `deep_set_new_messages_test() -> any()` ### deep_set_test/1 * ### `deep_set_test(Opts) -> any()` ### deep_set_with_device_test/1 * ### `deep_set_with_device_test(Opts) -> any()` ### denormalized_device_key_test/1 * ### `denormalized_device_key_test(Opts) -> any()` ### device_excludes_test/1 * ### `device_excludes_test(Opts) -> any()` ### device_exports_test/1 * ### `device_exports_test(Opts) -> any()` ### device_with_default_handler_function_test/1 * ### `device_with_default_handler_function_test(Opts) -> any()` ### device_with_handler_function_test/1 * ### `device_with_handler_function_test(Opts) -> any()` ### exec_dummy_device/2 * ### `exec_dummy_device(SigningWallet, Opts) -> any()` Ensure that we can read a device from the cache then execute it. By extension, this will also allow us to load a device from Arweave due to the remote store implementations. ### gen_default_device/0 * ### `gen_default_device() -> any()` Create a simple test device that implements the default handler. ### gen_handler_device/0 * ### `gen_handler_device() -> any()` Create a simple test device that implements the handler key. ### generate_device_with_keys_using_args/0 * ### `generate_device_with_keys_using_args() -> any()` Generates a test device with three keys, each of which uses progressively more of the arguments that can be passed to a device key. ### get_as_with_device_test/1 * ### `get_as_with_device_test(Opts) -> any()` ### get_with_device_test/1 * ### `get_with_device_test(Opts) -> any()` ### key_from_id_device_with_args_test/1 * ### `key_from_id_device_with_args_test(Opts) -> any()` Test that arguments are passed to a device key as expected. Particularly, we need to ensure that the key function in the device can specify any arity (1 through 3) and the call is handled correctly. ### key_to_binary_test/1 * ### `key_to_binary_test(Opts) -> any()` ### list_transform_test/1 * ### `list_transform_test(Opts) -> any()` ### load_as_test/1 * ### `load_as_test(Opts) -> any()` ### load_device_test/0 * ### `load_device_test() -> any()` ### recursive_get_test/1 * ### `recursive_get_test(Opts) -> any()` ### resolve_binary_key_test/1 * ### `resolve_binary_key_test(Opts) -> any()` ### resolve_from_multiple_keys_test/1 * ### `resolve_from_multiple_keys_test(Opts) -> any()` ### resolve_id_test/1 * ### `resolve_id_test(Opts) -> any()` ### resolve_key_twice_test/1 * ### `resolve_key_twice_test(Opts) -> any()` ### resolve_path_element_test/1 * ### `resolve_path_element_test(Opts) -> any()` ### resolve_simple_test/1 * ### `resolve_simple_test(Opts) -> any()` ### run_all_test_/0 * ### `run_all_test_() -> any()` Run each test in the file with each set of options. Start and reset the store for each test. 
### run_test/0 * ### `run_test() -> any()` ### set_with_device_test/1 * ### `set_with_device_test(Opts) -> any()` ### start_as_test/1 * ### `start_as_test(Opts) -> any()` ### start_as_with_parameters_test/1 * ### `start_as_with_parameters_test(Opts) -> any()` ### step_hook_test/1 * ### `step_hook_test(InitOpts) -> any()` ### test_opts/0 * ### `test_opts() -> any()` ### test_suite/0 * ### `test_suite() -> any()` ### untrusted_load_device_test/0 * ### `untrusted_load_device_test() -> any()` --- END OF FILE: docs/resources/source-code/hb_ao_test_vectors.md --- --- START OF FILE: docs/resources/source-code/hb_ao.md --- # [Module hb_ao.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_ao.erl) This module is the root of the device call logic of the AO-Core protocol in HyperBEAM. ## Description ## At the implementation level, every message is simply a collection of keys, dictated by its `Device`, that can be resolved in order to yield their values. Each key may contain a link to another message or a raw value: `ao(BaseMessage, RequestMessage) -> {Status, Result}` Under-the-hood, `AO-Core(BaseMessage, RequestMessage)` leads to a lookup of the `device` key of the base message, followed by the evaluation of `DeviceMod:PathPart(BaseMessage, RequestMessage)`, which defines the user compute to be performed. If `BaseMessage` does not specify a device, `~message@1.0` is assumed. The key to resolve is specified by the `path` field of the message. After each output, the `HashPath` is updated to include the `RequestMessage` that was executed upon it. Because each message implies a device that can resolve its keys, as well as generating a merkle tree of the computation that led to the result, you can see the AO-Core protocol as a system for cryptographically chaining the execution of `combinators`. See `docs/ao-core-protocol.md` for more information about AO-Core. The `key(BaseMessage, RequestMessage)` pattern is repeated throughout the HyperBEAM codebase, sometimes with `BaseMessage` replaced with `Msg1`, `M1` or similar, and `RequestMessage` replaced with `Msg2`, `M2`, etc. The result of any computation can be either a new message or a raw literal value (a binary, integer, float, atom, or list of such values). Devices can be expressed as either modules or maps. They can also be referenced by an Arweave ID, which can be used to load a device from the network (depending on the value of the `load_remote_devices` and `trusted_device_signers` environment settings). HyperBEAM device implementations are defined as follows: ``` DevMod:ExportedFunc : Key resolution functions. All are assumed to be device keys (thus, present in every message that uses it) unless specified by DevMod:info(). Each function takes a set of parameters of the form DevMod:KeyHandler(Msg1, Msg2, Opts). Each of these arguments can be ommitted if not needed. Non-exported functions are not assumed to be device keys. DevMod:info : Optional. Returns a map of options for the device. All options are optional and assumed to be the defaults if not specified. This function can accept a Message1 as an argument, allowing it to specify its functionality based on a specific message if appropriate. info/exports : Overrides the export list of the Erlang module, such that only the functions in this list are assumed to be device keys. Defaults to all of the functions that DevMod exports in the Erlang environment. info/excludes : A list of keys that should not be resolved by the device, despite being present in the Erlang module exports list. 
info/handler : A function that should be used to handle _all_ keys for messages using the device. info/default : A function that should be used to handle all keys that are not explicitly implemented by the device. Defaults to the dev_message device, which contains general keys for interacting with messages. info/default_mod : A different device module that should be used to handle all keys that are not explicitly implemented by the device. Defaults to the dev_message device. info/grouper : A function that returns the concurrency 'group' name for an execution. Executions with the same group name will be executed by sending a message to the associated process and waiting for a response. This allows you to control concurrency of execution and to allow executions to share in-memory state as applicable. Default: A derivation of Msg1+Msg2. This means that concurrent calls for the same output will lead to only a single execution. info/worker : A function that should be run as the 'server' loop of the executor for interactions using the device. The HyperBEAM resolver also takes a number of runtime options that change the way that the environment operates:update_hashpath: Whether to add the Msg2 to HashPath for the Msg3. Default: true.add_key: Whether to add the key to the start of the arguments. Default: . ``` ## Function Index ##
deep_set/4Recursively search a map, resolving keys, and set the value of the key at the given path.
default_module/0*The default device is the identity device, which simply returns the value associated with any key as it exists in its Erlang map.
device_set/4*Call the device's set function.
device_set/5*
do_resolve_many/2*
ensure_message_loaded/2*Ensure that a message is loaded from the cache if it is an ID, or a link, such that it is ready for execution.
error_execution/5*Handle an error in a device call.
error_infinite/3*Catch all return if we are in an infinite loop.
error_invalid_intermediate_status/5*
error_invalid_message/3*Catch all return if the message is invalid.
find_exported_function/5Find the function with the highest arity that has the given name, if it exists.
force_message/2
get/2Shortcut for resolving a key in a message without its status if it is ok.
get/3
get/4
get_first/2Take a sequence of base messages and paths, then return the value of the first message that can be resolved using a path.
get_first/3
info/2Get the info map for a device, optionally giving it a message if the device's info function is parameterized by one.
info/3*
info_handler_to_fun/4*Parse a handler key given by a device's info.
internal_opts/1*The execution options that are used internally by this module when calling itself.
is_exported/3*
is_exported/4Check if a device is guarding a key via its exports list.
keys/1Shortcut to get the list of keys from a message.
keys/2
keys/3
load_device/2Load a device module from its name or a message ID.
maybe_force_message/2*Force the result of a device call into a message if the result is not requested by the Opts.
message_to_device/2Extract the device module from a message.
message_to_fun/3Calculate the Erlang function that should be called to get a value for a given key from a device.
normalize_key/1Convert a key to a binary in normalized form.
normalize_key/2
normalize_keys/1Ensure that a message is processable by the AO-Core resolver: No lists.
normalize_keys/2
remove/2Remove a key from a message, using its underlying device.
remove/3
resolve/2Get the value of a message's key by running its associated device function.
resolve/3
resolve_many/2Resolve a list of messages in sequence.
resolve_stage/4*
resolve_stage/5*
resolve_stage/6*
set/3Shortcut for setting a key in the message using its underlying device.
set/4
subresolve/4*Execute a sub-resolution.
truncate_args/2Truncate the arguments of a function to the number of arguments it actually takes.
verify_device_compatibility/2*Verify that a device is compatible with the current machine.
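A minimal sketch of the `resolve`/`get`/`set` shortcuts detailed below, assuming a plain map message handled by the default `~message@1.0` device; the key names and `Opts` are illustrative.

```erlang
%% Key names and Opts are illustrative; a plain map uses ~message@1.0.
ao_shortcuts(Opts) ->
    Msg1 = #{ <<"greeting">> => <<"hello">> },
    %% resolve/3 returns the raw {Status, Result} pair...
    {ok, Value} = hb_ao:resolve(Msg1, <<"greeting">>, Opts),
    %% ...get/3 strips the status, and get/4 accepts a default for misses.
    Value = hb_ao:get(<<"greeting">>, Msg1, Opts),
    Fallback = hb_ao:get(<<"missing">>, Msg1, <<"fallback">>, Opts),
    %% set/3 yields a new message with the requested keys changed, while
    %% maintaining the HashPath.
    Msg2 = hb_ao:set(Msg1, #{ <<"greeting">> => <<"hi">> }, Opts),
    {Value, Fallback, hb_ao:get(<<"greeting">>, Msg2, Opts)}.
```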
## Function Details ## ### deep_set/4 ### `deep_set(Msg, Rest, Value, Opts) -> any()` Recursively search a map, resolving keys, and set the value of the key at the given path. This function has special cases for handling `set` calls where the path is an empty list (`/`). In this case, if the value is an immediate, non-complex term, we can set it directly. Otherwise, we use the device's `set` function to set the value. ### default_module/0 * ### `default_module() -> any()` The default device is the identity device, which simply returns the value associated with any key as it exists in its Erlang map. It should also implement the `set` key, which returns a `Message3` with the values changed according to the `Message2` passed to it. ### device_set/4 * ### `device_set(Msg, Key, Value, Opts) -> any()` Call the device's `set` function. ### device_set/5 * ### `device_set(Msg, Key, Value, Mode, Opts) -> any()` ### do_resolve_many/2 * ### `do_resolve_many(MsgList, Opts) -> any()` ### ensure_message_loaded/2 * ### `ensure_message_loaded(MsgID, Opts) -> any()` Ensure that a message is loaded from the cache if it is an ID, or a link, such that it is ready for execution. ### error_execution/5 * ### `error_execution(ExecGroup, Msg2, Whence, X4, Opts) -> any()` Handle an error in a device call. ### error_infinite/3 * ### `error_infinite(Msg1, Msg2, Opts) -> any()` Catch all return if we are in an infinite loop. ### error_invalid_intermediate_status/5 * ### `error_invalid_intermediate_status(Msg1, Msg2, Msg3, RemainingPath, Opts) -> any()` ### error_invalid_message/3 * ### `error_invalid_message(Msg1, Msg2, Opts) -> any()` Catch all return if the message is invalid. ### find_exported_function/5 ### `find_exported_function(Msg, Dev, Key, MaxArity, Opts) -> any()` Find the function with the highest arity that has the given name, if it exists. If the device is a module, we look for a function with the given name. If the device is a map, we look for a key in the map. First we try to find the key using its literal value. If that fails, we cast the key to an atom and try again. ### force_message/2 ### `force_message(X1, Opts) -> any()` ### get/2 ### `get(Path, Msg) -> any()` Shortcut for resolving a key in a message without its status if it is `ok`. This makes it easier to write complex logic on top of messages while maintaining a functional style. Additionally, this function supports the `{as, Device, Msg}` syntax, which allows the key to be resolved using another device to resolve the key, while maintaining the tracability of the `HashPath` of the output message. Returns the value of the key if it is found, otherwise returns the default provided by the user, or `not_found` if no default is provided. ### get/3 ### `get(Path, Msg, Opts) -> any()` ### get/4 ### `get(Path, Msg, Default, Opts) -> any()` ### get_first/2 ### `get_first(Paths, Opts) -> any()` take a sequence of base messages and paths, then return the value of the first message that can be resolved using a path. ### get_first/3 ### `get_first(Msgs, Default, Opts) -> any()` ### info/2 ### `info(Msg, Opts) -> any()` Get the info map for a device, optionally giving it a message if the device's info function is parameterized by one. ### info/3 * ### `info(DevMod, Msg, Opts) -> any()` ### info_handler_to_fun/4 * ### `info_handler_to_fun(Handler, Msg, Key, Opts) -> any()` Parse a handler key given by a device's `info`. ### internal_opts/1 * ### `internal_opts(Opts) -> any()` The execution options that are used internally by this module when calling itself. 
### is_exported/3 * ### `is_exported(Info, Key, Opts) -> any()` ### is_exported/4 ### `is_exported(Msg, Dev, Key, Opts) -> any()` Check if a device is guarding a key via its `exports` list. Defaults to true if the device does not specify an `exports` list. The `info` function is always exported, if it exists. Elements of the `exludes` list are not exported. Note that we check for info _twice_ -- once when the device is given but the info result is not, and once when the info result is given. The reason for this is that `info/3` calls other functions that may need to check if a key is exported, so we must avoid infinite loops. We must, however, also return a consistent result in the case that only the info result is given, so we check for it in both cases. ### keys/1 ### `keys(Msg) -> any()` Shortcut to get the list of keys from a message. ### keys/2 ### `keys(Msg, Opts) -> any()` ### keys/3 ### `keys(Msg, Opts, X3) -> any()` ### load_device/2 ### `load_device(Map, Opts) -> any()` Load a device module from its name or a message ID. Returns {ok, Executable} where Executable is the device module. On error, a tuple of the form {error, Reason} is returned. ### maybe_force_message/2 * ### `maybe_force_message(X1, Opts) -> any()` Force the result of a device call into a message if the result is not requested by the `Opts`. If the result is a literal, we wrap it in a message and signal the location of the result inside. We also similarly handle ao-result when the result is a single value and an explicit status code. ### message_to_device/2 ### `message_to_device(Msg, Opts) -> any()` Extract the device module from a message. ### message_to_fun/3 ### `message_to_fun(Msg, Key, Opts) -> any()` Calculate the Erlang function that should be called to get a value for a given key from a device. This comes in 7 forms: 1. The message does not specify a device, so we use the default device. 2. The device has a `handler` key in its `Dev:info()` map, which is a function that takes a key and returns a function to handle that key. We pass the key as an additional argument to this function. 3. The device has a function of the name `Key`, which should be called directly. 4. The device does not implement the key, but does have a default handler for us to call. We pass it the key as an additional argument. 5. The device does not implement the key, and has no default handler. We use the default device to handle the key. Error: If the device is specified, but not loadable, we raise an error. Returns {ok | add_key, Fun} where Fun is the function to call, and add_key indicates that the key should be added to the start of the call's arguments. ### normalize_key/1 ### `normalize_key(Key) -> any()` Convert a key to a binary in normalized form. ### normalize_key/2 ### `normalize_key(Key, Opts) -> any()` ### normalize_keys/1 ### `normalize_keys(Msg) -> any()` Ensure that a message is processable by the AO-Core resolver: No lists. ### normalize_keys/2 ### `normalize_keys(Msg1, Opts) -> any()` ### remove/2 ### `remove(Msg, Key) -> any()` Remove a key from a message, using its underlying device. ### remove/3 ### `remove(Msg, Key, Opts) -> any()` ### resolve/2 ### `resolve(Path, Opts) -> any()` Get the value of a message's key by running its associated device function. Optionally, takes options that control the runtime environment. This function returns the raw result of the device function call: `{ok | error, NewMessage}.` The resolver is composed of a series of discrete phases: 1: Normalization. 2: Cache lookup. 
3: Validation check. 4: Persistent-resolver lookup. 5: Device lookup. 6: Execution. 7: Execution of the `step` hook. 8: Subresolution. 9: Cryptographic linking. 10: Result caching. 11: Notify waiters. 12: Fork worker. 13: Recurse or terminate. ### resolve/3 ### `resolve(Msg1, Path, Opts) -> any()` ### resolve_many/2 ### `resolve_many(ListMsg, Opts) -> any()` Resolve a list of messages in sequence. Take the output of the first message as the input for the next message. Once the last message is resolved, return the result. A `resolve_many` call with only a single ID will attempt to read the message directly from the store. No execution is performed. ### resolve_stage/4 * ### `resolve_stage(X1, Link, Msg2, Opts) -> any()` ### resolve_stage/5 * ### `resolve_stage(X1, Msg1, Msg2, ExecName, Opts) -> any()` ### resolve_stage/6 * ### `resolve_stage(X1, Func, Msg1, Msg2, ExecName, Opts) -> any()` ### set/3 ### `set(RawMsg1, RawMsg2, Opts) -> any()` Shortcut for setting a key in the message using its underlying device. Like the `get/3` function, this function honors the `error_strategy` option. `set` works with maps and recursive paths while maintaining the appropriate `HashPath` for each step. ### set/4 ### `set(Msg1, Key, Value, Opts) -> any()` ### subresolve/4 * ### `subresolve(RawMsg1, DevID, ReqPath, Opts) -> any()` Execute a sub-resolution. ### truncate_args/2 ### `truncate_args(Fun, Args) -> any()` Truncate the arguments of a function to the number of arguments it actually takes. ### verify_device_compatibility/2 * ### `verify_device_compatibility(Msg, Opts) -> any()` Verify that a device is compatible with the current machine. --- END OF FILE: docs/resources/source-code/hb_ao.md --- --- START OF FILE: docs/resources/source-code/hb_app.md --- # [Module hb_app.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_app.erl) The main HyperBEAM application module. __Behaviours:__ [`application`](application.md). ## Function Index ##
start/2
stop/1
## Function Details ## ### start/2 ### `start(StartType, StartArgs) -> any()` ### stop/1 ### `stop(State) -> any()` --- END OF FILE: docs/resources/source-code/hb_app.md --- --- START OF FILE: docs/resources/source-code/hb_beamr_io.md --- # [Module hb_beamr_io.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_beamr_io.erl) Simple interface for memory management for Beamr instances. ## Description ## It allows for reading and writing to memory, as well as allocating and freeing memory by calling the WASM module's exported malloc and free functions. Unlike the majority of HyperBEAM modules, this module takes a defensive approach to type checking, breaking from the conventional Erlang style, such that failures are caught in the Erlang-side of functions rather than in the C/WASM-side. ## Function Index ##
do_read_string/3*
free/2Free space allocated in the Beamr instance's native memory via a call to the exported free function from the WASM.
malloc/2Allocate space in the Beamr instance's native memory via an exported malloc function from the WASM module.
malloc_test/0*Test allocating and freeing memory.
read/3Read a binary from the Beamr instance's native memory at a given offset and of a given size.
read_string/2Simple helper function to read a string from the Beamr instance's native memory at a given offset.
read_string/3*
read_test/0*Test reading memory in and out of bounds.
size/1Get the size (in bytes) of the native memory allocated in the Beamr instance.
size_test/0*
string_write_and_read_test/0*Write and read strings to memory.
write/3Write a binary to the Beamr instance's native memory at a given offset.
write_string/2Simple helper function to allocate space for a string (via malloc) and write it to the Beamr instance's native memory.
write_test/0*Test writing memory in and out of bounds.
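A hedged sketch of the read/write helpers, assuming `WASM` is an already-started Beamr instance (see `hb_beamr:start/1`) whose module exports `malloc` and `free`; the tagged return shapes matched on here are assumptions.

```erlang
%% WASM is an already-started Beamr instance; return shapes are assumptions.
memory_roundtrip(WASM) ->
    Data = <<"hello, beamr">>,
    %% Manual route: allocate, write, read back, then free.
    {ok, Ptr} = hb_beamr_io:malloc(WASM, byte_size(Data)),
    ok = hb_beamr_io:write(WASM, Ptr, Data),
    {ok, Data} = hb_beamr_io:read(WASM, Ptr, byte_size(Data)),
    ok = hb_beamr_io:free(WASM, Ptr),
    %% Helper route: write_string mallocs and null-terminates the input;
    %% read_string walks memory until the null terminator.
    {ok, StrPtr} = hb_beamr_io:write_string(WASM, <<"hi">>),
    {ok, <<"hi">>} = hb_beamr_io:read_string(WASM, StrPtr),
    ok.
```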
## Function Details ## ### do_read_string/3 * ### `do_read_string(WASM, Offset, ChunkSize) -> any()` ### free/2 ### `free(WASM, Ptr) -> any()` Free space allocated in the Beamr instance's native memory via a call to the exported free function from the WASM. ### malloc/2 ### `malloc(WASM, Size) -> any()` Allocate space for (via an exported malloc function from the WASM) in the Beamr instance's native memory. ### malloc_test/0 * ### `malloc_test() -> any()` Test allocating and freeing memory. ### read/3 ### `read(WASM, Offset, Size) -> any()` Read a binary from the Beamr instance's native memory at a given offset and of a given size. ### read_string/2 ### `read_string(Port, Offset) -> any()` Simple helper function to read a string from the Beamr instance's native memory at a given offset. Memory is read by default in chunks of 8 bytes, but this can be overridden by passing a different chunk size. Strings are assumed to be null-terminated. ### read_string/3 * ### `read_string(WASM, Offset, ChunkSize) -> any()` ### read_test/0 * ### `read_test() -> any()` Test reading memory in and out of bounds. ### size/1 ### `size(WASM) -> any()` Get the size (in bytes) of the native memory allocated in the Beamr instance. Note that WASM memory can never be reduced once granted to an instance (although it can, of course, be reallocated _inside_ the environment). ### size_test/0 * ### `size_test() -> any()` ### string_write_and_read_test/0 * ### `string_write_and_read_test() -> any()` Write and read strings to memory. ### write/3 ### `write(WASM, Offset, Data) -> any()` Write a binary to the Beamr instance's native memory at a given offset. ### write_string/2 ### `write_string(WASM, Data) -> any()` Simple helper function to allocate space for (via malloc) and write a string to the Beamr instance's native memory. This can be helpful for easily pushing a string into the instance, such that the resulting pointer can be passed to exported functions from the instance. Assumes that the input is either an iolist or a binary, adding a null byte to the end of the string. ### write_test/0 * ### `write_test() -> any()` Test writing memory in and out of bounds. --- END OF FILE: docs/resources/source-code/hb_beamr_io.md --- --- START OF FILE: docs/resources/source-code/hb_beamr.md --- # [Module hb_beamr.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_beamr.erl) BEAMR: A WAMR wrapper for BEAM. ## Description ## Beamr is a library that allows you to run WASM modules in BEAM, using the Webassembly Micro Runtime (WAMR) as its engine. Each WASM module is executed using a Linked-In Driver (LID) that is loaded into BEAM. It is designed with a focus on supporting long-running WASM executions that interact with Erlang functions and processes easily. Because each WASM module runs as an independent async worker, if you plan to run many instances in parallel, you should be sure to configure the BEAM to have enough async worker threads enabled (see `erl +A N` in the Erlang manuals). The core API is simple: ``` start(WasmBinary) -> {ok, Port, Imports, Exports} Where: WasmBinary is the WASM binary to load. Port is the port to the LID. Imports is a list of tuples of the form {Module, Function, Args, Signature}. Exports is a list of tuples of the form {Function, Args, Signature}. stop(Port) -> ok call(Port, FunctionName, Args) -> {ok, Result} Where: FunctionName is the name of the function to call. Args is a list of Erlang terms (converted to WASM values by BEAMR) that match the signature of the function. 
Result is a list of Erlang terms (converted from WASM values). call(Port, FunName, Args[, Import, State, Opts]) -> {ok, Res, NewState} Where: ImportFun is a function that will be called upon each import. ImportFun must have an arity of 2: Taking an arbitrary state term, and a map containing the port, module, func, args,signature, and the options map of the import. It must return a tuple of the form {ok, Response, NewState}. serialize(Port) -> {ok, Mem} Where: Port is the port to the LID. Mem is a binary representing the full WASM state. deserialize(Port, Mem) -> ok Where: Port is the port to the LID. Mem is a binary output of a previous serialize/1 call. ``` BEAMR was designed for use in the HyperBEAM project, but is suitable for deployment in other Erlang applications that need to run WASM modules. PRs are welcome. ## Function Index ##
benchmark_test/0*
call/3Call a function in the WASM executor (see moduledoc for more details).
call/4
call/5
call/6
deserialize/2Deserialize a WASM state from a binary.
dispatch_response/2*Check the type of an import response and dispatch it to a Beamr port.
driver_loads_test/0*
imported_function_test/0*Test that imported functions can be called from the WASM module.
is_valid_arg_list/1*Check that a list of arguments is valid for a WASM function call.
load_driver/0*Load the driver for the WASM executor.
monitor_call/4*Synchronously monitor the WASM executor for a call result and any imports that need to be handled.
multiclient_test/0*Ensure that processes outside of the initial one can interact with the WASM executor.
serialize/1Serialize the WASM state to a binary.
simple_wasm_test/0*Test standalone hb_beamr correctly after loading a WASM module.
start/1Start a WASM executor context.
start/2
stop/1Stop a WASM executor context.
stub/3Stub import function for the WASM executor.
wasm64_test/0*Test that WASM Memory64 modules load and execute correctly.
wasm_send/2
worker/2*A worker process that is responsible for handling a WASM instance.
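A minimal sketch of the core API documented above; the WASM file path and exported function name ("fac") are illustrative.

```erlang
%% The WASM file path and exported function name ("fac") are illustrative.
beamr_roundtrip(WasmPath) ->
    {ok, WasmBinary} = file:read_file(WasmPath),
    {ok, Port, _Imports, _Exports} = hb_beamr:start(WasmBinary),
    %% Results is a list of Erlang terms converted from the WASM return values.
    {ok, Results} = hb_beamr:call(Port, <<"fac">>, [5]),
    %% Snapshot the full WASM state, restore it, then stop the executor.
    {ok, Mem} = hb_beamr:serialize(Port),
    ok = hb_beamr:deserialize(Port, Mem),
    ok = hb_beamr:stop(Port),
    Results.
```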
## Function Details ## ### benchmark_test/0 * ### `benchmark_test() -> any()` ### call/3 ### `call(PID, FuncRef, Args) -> any()` Call a function in the WASM executor (see moduledoc for more details). ### call/4 ### `call(PID, FuncRef, Args, ImportFun) -> any()` ### call/5 ### `call(PID, FuncRef, Args, ImportFun, StateMsg) -> any()` ### call/6 ### `call(PID, FuncRef, Args, ImportFun, StateMsg, Opts) -> any()` ### deserialize/2 ### `deserialize(WASM, Bin) -> any()` Deserialize a WASM state from a binary. ### dispatch_response/2 * ### `dispatch_response(WASM, Term) -> any()` Check the type of an import response and dispatch it to a Beamr port. ### driver_loads_test/0 * ### `driver_loads_test() -> any()` ### imported_function_test/0 * ### `imported_function_test() -> any()` Test that imported functions can be called from the WASM module. ### is_valid_arg_list/1 * ### `is_valid_arg_list(Args) -> any()` Check that a list of arguments is valid for a WASM function call. ### load_driver/0 * ### `load_driver() -> any()` Load the driver for the WASM executor. ### monitor_call/4 * ### `monitor_call(WASM, ImportFun, StateMsg, Opts) -> any()` Synchonously monitor the WASM executor for a call result and any imports that need to be handled. ### multiclient_test/0 * ### `multiclient_test() -> any()` Ensure that processes outside of the initial one can interact with the WASM executor. ### serialize/1 ### `serialize(WASM) -> any()` Serialize the WASM state to a binary. ### simple_wasm_test/0 * ### `simple_wasm_test() -> any()` Test standalone `hb_beamr` correctly after loading a WASM module. ### start/1 ### `start(WasmBinary) -> any()` Start a WASM executor context. Yields a port to the LID, and the imports and exports of the WASM module. Optionally, specify a mode (wasm or aot) to indicate the type of WASM module being loaded. ### start/2 ### `start(WasmBinary, Mode) -> any()` ### stop/1 ### `stop(WASM) -> any()` Stop a WASM executor context. ### stub/3 ### `stub(Msg1, Msg2, Opts) -> any()` Stub import function for the WASM executor. ### wasm64_test/0 * ### `wasm64_test() -> any()` Test that WASM Memory64 modules load and execute correctly. ### wasm_send/2 ### `wasm_send(WASM, Message) -> any()` ### worker/2 * ### `worker(Port, Listener) -> any()` A worker process that is responsible for handling a WASM instance. It wraps the WASM port, handling inputs and outputs from the WASM module. The last sender to the port is always the recipient of its messages, so be careful to ensure that there is only one active sender to the port at any time. --- END OF FILE: docs/resources/source-code/hb_beamr.md --- --- START OF FILE: docs/resources/source-code/hb_cache_control.md --- # [Module hb_cache_control.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_cache_control.erl) Cache control logic for the AO-Core resolver. ## Description ## It derives cache settings from request, response, execution-local node Opts, as well as the global node Opts. It applies these settings when asked to maybe store/lookup in response to a request. ## Function Index ##
cache_binary_result_test/0*
cache_message_result_test/0*
cache_source_to_cache_settings/2*Convert a cache source to a cache setting.
derive_cache_settings/2*Derive cache settings from a series of option sources and the opts, honoring precedence order.
dispatch_cache_write/4*Dispatch the cache write to a worker process if requested.
empty_message_list_test/0*
exec_likely_faster_heuristic/3*Determine whether we are likely to be faster looking up the result in our cache (hoping we have it), or executing it directly.
hashpath_ignore_prevents_storage_test/0*
is_explicit_lookup/3*
lookup/3*
maybe_lookup/3Handles cache lookup, modulated by the caching options requested by the user.
maybe_set/3*Takes a key and two maps, returning the first map with the key set to the value of the second map _if_ the value is not undefined.
maybe_store/4Write a resulting M3 message to the cache if requested.
message_source_cache_control_test/0*
message_without_cache_control_test/0*
msg_precidence_overrides_test/0*
msg_with_cc/1*
multiple_directives_test/0*
necessary_messages_not_found_error/3*Generate a message to return when the necessary messages to execute a cache lookup are not found in the cache.
no_cache_directive_test/0*
no_store_directive_test/0*
only_if_cached_directive_test/0*
only_if_cached_not_found_error/3*Generate a message to return when only_if_cached was specified, and we don't have a cached result.
opts_override_message_settings_test/0*
opts_source_cache_control_test/0*
opts_with_cc/1*
specifiers_to_cache_settings/1*Convert a cache control list as received via HTTP headers into a normalized map stating simply whether we should store and/or look up the result.
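A hedged sketch of consulting the cache before execution via `maybe_lookup/3`. The directive form used in `Opts` and the non-hit return shapes are assumptions, so anything other than a straightforward hit is treated opaquely.

```erlang
%% Directive form and non-hit return shapes are assumptions; non-hits are
%% treated opaquely so the sketch does not depend on their exact structure.
cached_or_miss(Msg1, Msg2) ->
    Opts = #{ cache_control => [<<"only-if-cached">>] },
    case hb_cache_control:maybe_lookup(Msg1, Msg2, Opts) of
        {ok, Msg3} -> {cache_hit, Msg3};
        Other      -> {no_usable_cache_entry, Other}
    end.
```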
## Function Details ## ### cache_binary_result_test/0 * ### `cache_binary_result_test() -> any()` ### cache_message_result_test/0 * ### `cache_message_result_test() -> any()` ### cache_source_to_cache_settings/2 * ### `cache_source_to_cache_settings(Msg, Opts) -> any()` Convert a cache source to a cache setting. The setting _must_ always be directly in the source, not an AO-Core-derivable value. The `to_cache_control_map` function is used as the source of settings in all cases, except where an `Opts` specifies that hashpaths should not be updated, which leads to the result not being cached (as it may be stored with an incorrect hashpath). ### derive_cache_settings/2 * ### `derive_cache_settings(SourceList, Opts) -> any()` Derive cache settings from a series of option sources and the opts, honoring precidence order. The Opts is used as the first source. Returns a map with `store` and `lookup` keys, each of which is a boolean. For example, if the last source has a `no_store`, the first expresses no preference, but the Opts has `cache_control => [always]`, then the result will contain a `store => true` entry. ### dispatch_cache_write/4 * ### `dispatch_cache_write(Msg1, Msg2, Msg3, Opts) -> any()` Dispatch the cache write to a worker process if requested. Invoke the appropriate cache write function based on the type of the message. ### empty_message_list_test/0 * ### `empty_message_list_test() -> any()` ### exec_likely_faster_heuristic/3 * ### `exec_likely_faster_heuristic(Msg1, Msg2, Opts) -> any()` Determine whether we are likely to be faster looking up the result in our cache (hoping we have it), or executing it directly. ### hashpath_ignore_prevents_storage_test/0 * ### `hashpath_ignore_prevents_storage_test() -> any()` ### is_explicit_lookup/3 * ### `is_explicit_lookup(Msg1, X2, Opts) -> any()` ### lookup/3 * ### `lookup(Msg1, Msg2, Opts) -> any()` ### maybe_lookup/3 ### `maybe_lookup(Msg1, Msg2, Opts) -> any()` Handles cache lookup, modulated by the caching options requested by the user. Honors the following `Opts` cache keys: `only_if_cached`: If set and we do not find a result in the cache, return an error with a `Cache-Status` of `miss` and a 504 `Status`. `no_cache`: If set, the cached values are never used. Returns `continue` to the caller. ### maybe_set/3 * ### `maybe_set(Map1, Map2, Opts) -> any()` Takes a key and two maps, returning the first map with the key set to the value of the second map _if_ the value is not undefined. ### maybe_store/4 ### `maybe_store(Msg1, Msg2, Msg3, Opts) -> any()` Write a resulting M3 message to the cache if requested. The precedence order of cache control sources is as follows: 1. The `Opts` map (letting the node operator have the final say). 2. The `Msg3` results message (granted by Msg1's device). 3. The `Msg2` message (the user's request). Msg1 is not used, such that it can specify cache control information about itself, without affecting its outputs. 
### message_source_cache_control_test/0 * ### `message_source_cache_control_test() -> any()` ### message_without_cache_control_test/0 * ### `message_without_cache_control_test() -> any()` ### msg_precidence_overrides_test/0 * ### `msg_precidence_overrides_test() -> any()` ### msg_with_cc/1 * ### `msg_with_cc(CC) -> any()` ### multiple_directives_test/0 * ### `multiple_directives_test() -> any()` ### necessary_messages_not_found_error/3 * ### `necessary_messages_not_found_error(Msg1, Msg2, Opts) -> any()` Generate a message to return when the necessary messages to execute a cache lookup are not found in the cache. ### no_cache_directive_test/0 * ### `no_cache_directive_test() -> any()` ### no_store_directive_test/0 * ### `no_store_directive_test() -> any()` ### only_if_cached_directive_test/0 * ### `only_if_cached_directive_test() -> any()` ### only_if_cached_not_found_error/3 * ### `only_if_cached_not_found_error(Msg1, Msg2, Opts) -> any()` Generate a message to return when `only_if_cached` was specified, and we don't have a cached result. ### opts_override_message_settings_test/0 * ### `opts_override_message_settings_test() -> any()` ### opts_source_cache_control_test/0 * ### `opts_source_cache_control_test() -> any()` ### opts_with_cc/1 * ### `opts_with_cc(CC) -> any()` ### specifiers_to_cache_settings/1 * ### `specifiers_to_cache_settings(CCSpecifier) -> any()` Convert a cache control list as received via HTTP headers into a normalized map of simply whether we should store and/or lookup the result. --- END OF FILE: docs/resources/source-code/hb_cache_control.md --- --- START OF FILE: docs/resources/source-code/hb_cache_render.md --- # [Module hb_cache_render.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_cache_render.erl) A module that helps to render given Key graphs into the .dot files. ## Function Index ##
add_arc/5*Add an arc to the graph.
add_node/4*Add a node to the graph.
cache_path_to_dot/2Generate a dot file from a cache path and options/store.
cache_path_to_dot/3
cache_path_to_graph/3Main function to collect graph elements.
cache_path_to_graph/4*
collect_output/2*Helper function to collect output from port.
dot_to_svg/1Convert a dot graph to SVG format.
extract_label/1*Extract a label from a path.
get_graph_data/3Get graph data for the Three.js visualization.
get_label/1*Extract a readable label from a path.
get_node_type/1*Convert node color from hb_cache_render to node type for visualization.
graph_to_dot/2*Generate the DOT file from the graph.
prepare_deeply_nested_complex_message/0
prepare_signed_data/0
prepare_unsigned_data/0
process_composite_node/7*Process a composite (directory) node.
process_simple_node/7*Process a simple (leaf) node.
render/1Render the given Key into svg.
render/2
test_signed/2*
test_unsigned/1*
traverse_store/5*Traverse the store recursively to build the graph.
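A hedged sketch of rendering part of the cache to an SVG file with the exported helpers, assuming both helpers return iodata directly; the output path is illustrative.

```erlang
%% Assumes the helpers return iodata directly; the output path is illustrative.
render_to_file(ToRender, StoreOrOpts) ->
    Dot = hb_cache_render:cache_path_to_dot(ToRender, StoreOrOpts),
    SVG = hb_cache_render:dot_to_svg(Dot),
    file:write_file("cache-graph.svg", SVG).
```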
## Function Details ## ### add_arc/5 * ### `add_arc(Graph, From, To, Label, Opts) -> any()` Add an arc to the graph ### add_node/4 * ### `add_node(Graph, ID, Color, Opts) -> any()` Add a node to the graph ### cache_path_to_dot/2 ### `cache_path_to_dot(ToRender, StoreOrOpts) -> any()` Generate a dot file from a cache path and options/store ### cache_path_to_dot/3 ### `cache_path_to_dot(ToRender, RenderOpts, StoreOrOpts) -> any()` ### cache_path_to_graph/3 ### `cache_path_to_graph(ToRender, GraphOpts, StoreOrOpts) -> any()` Main function to collect graph elements ### cache_path_to_graph/4 * ### `cache_path_to_graph(InitPath, GraphOpts, Store, Opts) -> any()` ### collect_output/2 * ### `collect_output(Port, Acc) -> any()` Helper function to collect output from port ### dot_to_svg/1 ### `dot_to_svg(DotInput) -> any()` Convert a dot graph to SVG format ### extract_label/1 * ### `extract_label(Path) -> any()` Extract a label from a path ### get_graph_data/3 ### `get_graph_data(Base, MaxSize, Opts) -> any()` Get graph data for the Three.js visualization ### get_label/1 * ### `get_label(Path) -> any()` Extract a readable label from a path ### get_node_type/1 * ### `get_node_type(Color) -> any()` Convert node color from hb_cache_render to node type for visualization ### graph_to_dot/2 * ### `graph_to_dot(Graph, Opts) -> any()` Generate the DOT file from the graph ### prepare_deeply_nested_complex_message/0 ### `prepare_deeply_nested_complex_message() -> any()` ### prepare_signed_data/0 ### `prepare_signed_data() -> any()` ### prepare_unsigned_data/0 ### `prepare_unsigned_data() -> any()` ### process_composite_node/7 * ### `process_composite_node(Store, Key, Parent, ResolvedPath, JoinedPath, Graph, Opts) -> any()` Process a composite (directory) node ### process_simple_node/7 * ### `process_simple_node(Store, Key, Parent, ResolvedPath, JoinedPath, Graph, Opts) -> any()` Process a simple (leaf) node ### render/1 ### `render(StoreOrOpts) -> any()` Render the given Key into svg ### render/2 ### `render(ToRender, StoreOrOpts) -> any()` ### test_signed/2 * ### `test_signed(Data, Wallet) -> any()` ### test_unsigned/1 * ### `test_unsigned(Data) -> any()` ### traverse_store/5 * ### `traverse_store(Store, Path, Parent, Graph, Opts) -> any()` Traverse the store recursively to build the graph --- END OF FILE: docs/resources/source-code/hb_cache_render.md --- --- START OF FILE: docs/resources/source-code/hb_cache.md --- # [Module hb_cache.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_cache.erl) A cache of AO-Core protocol messages and compute results. ## Description ## HyperBEAM stores all paths in key value stores, abstracted by the `hb_store` module. Each store has its own storage backend, but each works with simple key-value pairs. Each store can write binary keys at paths, and link between paths. There are three layers to HyperBEAMs internal data representation on-disk: 1. The raw binary data, written to the store at the hash of the content. Storing binary paths in this way effectively deduplicates the data. 2. The hashpath-graph of all content, stored as a set of links between hashpaths, their keys, and the data that underlies them. This allows all messages to share the same hashpath space, such that all requests from users additively fill-in the hashpath space, minimizing duplicated compute. 3. Messages, referrable by their IDs (committed or uncommitted). These are stored as a set of links commitment IDs and the uncommitted message. 
Before writing a message to the store, we convert it to Type-Annotated Binary Messages (TABMs), such that each of the keys in the message is either a map or a direct binary. Nested keys are lazily loaded from the stores, such that large deeply nested messages where only a small part of the data is actually used are not loaded into memory unnecessarily. In order to ensure that a message is loaded from the cache after a `read`, we can use the `ensure_loaded/1` and `ensure_all_loaded/1` functions. Ensure loaded will load the exact value that has been requested, while ensure all loaded will load the entire structure of the message into memory. Lazily loadable `links` are expressed as a tuple of the following form: `{link, ID, LinkOpts}`, where `ID` is the path to the data in the store, and `LinkOpts` is a map of suggested options to use when loading the data. In particular, this module ensures to stash the `store` option in `LinkOpts`, such that the `read` function can use the correct store without having to search unnecessarily. By providing an `Opts` argument to `ensure_loaded` or `ensure_all_loaded`, the caller can specify additional options to use when loading the data -- overriding the suggested options in the link. ## Function Index ##
cache_suite_test_/0*
calculate_all_ids/2*Calculate the IDs for a message.
commitment_path/2*Generate the commitment path for a given base path.
do_write_message/3*
ensure_all_loaded/1Ensure that all of the components of a message (whether a map, list, or immediate value) are recursively fully loaded from the stores into memory.
ensure_all_loaded/2
ensure_loaded/1Ensure that a value is loaded from the cache if it is an ID or a link.
ensure_loaded/2
link/3Make a link from one path to another in the store.
list/2List all items under a given path.
list_numbered/2List all items in a directory, assuming they are numbered.
prepare_commitments/2*The structured@1.0 encoder does not typically encode commitments; consequently, when we encounter a commitments message we prepare its contents separately, then write each to the store.
prepare_links/4*Prepare a set of links from a listing of subpaths.
read/2Read the message at a path.
read_ao_types/4*Read and parse the ao-types for a given path if it is in the supplied list of subpaths, returning a map of keys and their types.
read_resolved/3Read the output of a prior computation, given Msg1, Msg2, and some options.
run_test/0*
store_read/3*List all of the subpaths of a given path and return a map of keys and links to the subpaths, including their types.
test_deeply_nested_complex_message/1*Test deeply nested item storage and retrieval.
test_device_map_cannot_be_written_test/0*Test that message whose device is #{} cannot be written.
test_message_with_list/1*
test_signed/1
test_signed/2*
test_store_ans104_message/1*
test_store_binary/1*
test_store_simple_signed_message/1*Test storing and retrieving a simple unsigned item.
test_store_simple_unsigned_message/1*Test storing and retrieving a simple unsigned item.
test_store_unsigned_empty_message/1*
test_store_unsigned_nested_empty_message/1*
test_unsigned/1
to_integer/1*
types_to_implicit/1*Convert a map of ao-types to an implicit map of types.
write/2Write a message to the cache.
write_binary/3Write a raw binary key into the store and link it at a given hashpath.
write_binary/4*
write_hashpath/2Write a hashpath and its message to the store and link it.
write_hashpath/3*
write_key/6*Write a single key for a message into the store.
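A minimal sketch of the write/read round trip described above, assuming `Opts` carries the node's `store` configuration and that both calls return `{ok, ...}` tuples; the message contents are illustrative.

```erlang
%% The {ok, ...} return shapes are assumptions; Opts should carry the node's
%% `store` configuration.
cache_roundtrip(Opts) ->
    Msg = #{ <<"key">> => <<"value">>, <<"nested">> => #{ <<"n">> => 1 } },
    {ok, Path} = hb_cache:write(Msg, Opts),
    {ok, Read} = hb_cache:read(Path, Opts),
    %% `read` may leave nested messages as lazily-loadable links, so force the
    %% full structure into memory when that matters.
    hb_cache:ensure_all_loaded(Read).
```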
## Function Details ## ### cache_suite_test_/0 * ### `cache_suite_test_() -> any()` ### calculate_all_ids/2 * ### `calculate_all_ids(Bin, Opts) -> any()` Calculate the IDs for a message. ### commitment_path/2 * ### `commitment_path(Base, Opts) -> any()` Generate the commitment path for a given base path. ### do_write_message/3 * ### `do_write_message(Bin, Store, Opts) -> any()` ### ensure_all_loaded/1 ### `ensure_all_loaded(Msg) -> any()` Ensure that all of the components of a message (whether a map, list, or immediate value) are recursively fully loaded from the stores into memory. This is a catch-all function that is useful in situations where ensuring a message contains no links is important, but it carries potentially extreme performance costs. ### ensure_all_loaded/2 ### `ensure_all_loaded(Link, Opts) -> any()` ### ensure_loaded/1 ### `ensure_loaded(Msg) -> any()` Ensure that a value is loaded from the cache if it is an ID or a link. If it is not loadable we raise an error. If the value is a message, we will load only the first `layer` of it: Representing all nested messages inside the result as links. If the value has an associated `type` key in the extra options, we apply it to the read value, 'lazily' recreating a `structured@1.0` form. ### ensure_loaded/2 ### `ensure_loaded(Lk, RawOpts) -> any()` ### link/3 ### `link(Existing, New, Opts) -> any()` Make a link from one path to another in the store. Note: Argument order is `link(Src, Dst, Opts)`. ### list/2 ### `list(Path, Opts) -> any()` List all items under a given path. ### list_numbered/2 ### `list_numbered(Path, Opts) -> any()` List all items in a directory, assuming they are numbered. ### prepare_commitments/2 * ### `prepare_commitments(RawCommitments, Opts) -> any()` The `structured@1.0` encoder does not typically encode `commitments`, subsequently, when we encounter a commitments message we prepare its contents separately, then write each to the store. ### prepare_links/4 * ### `prepare_links(RootPath, Subpaths, Store, Opts) -> any()` Prepare a set of links from a listing of subpaths. ### read/2 ### `read(Path, Opts) -> any()` Read the message at a path. Returns in `structured@1.0` format: Either a richly typed map or a direct binary. ### read_ao_types/4 * ### `read_ao_types(Path, Subpaths, Store, Opts) -> any()` Read and parse the ao-types for a given path if it is in the supplied list of subpaths, returning a map of keys and their types. ### read_resolved/3 ### `read_resolved(MsgID1, MsgID2, Opts) -> any()` Read the output of a prior computation, given Msg1, Msg2, and some options. ### run_test/0 * ### `run_test() -> any()` ### store_read/3 * ### `store_read(Path, Store, Opts) -> any()` List all of the subpaths of a given path and return a map of keys and links to the subpaths, including their types. ### test_deeply_nested_complex_message/1 * ### `test_deeply_nested_complex_message(Store) -> any()` Test deeply nested item storage and retrieval ### test_device_map_cannot_be_written_test/0 * ### `test_device_map_cannot_be_written_test() -> any()` Test that message whose device is `#{}` cannot be written. If it were to be written, it would cause an infinite loop. 
### test_message_with_list/1 * ### `test_message_with_list(Store) -> any()` ### test_signed/1 ### `test_signed(Data) -> any()` ### test_signed/2 * ### `test_signed(Data, Wallet) -> any()` ### test_store_ans104_message/1 * ### `test_store_ans104_message(Store) -> any()` ### test_store_binary/1 * ### `test_store_binary(Store) -> any()` ### test_store_simple_signed_message/1 * ### `test_store_simple_signed_message(Store) -> any()` Test storing and retrieving a simple unsigned item ### test_store_simple_unsigned_message/1 * ### `test_store_simple_unsigned_message(Store) -> any()` Test storing and retrieving a simple unsigned item ### test_store_unsigned_empty_message/1 * ### `test_store_unsigned_empty_message(Store) -> any()` ### test_store_unsigned_nested_empty_message/1 * ### `test_store_unsigned_nested_empty_message(Store) -> any()` ### test_unsigned/1 ### `test_unsigned(Data) -> any()` ### to_integer/1 * ### `to_integer(Value) -> any()` ### types_to_implicit/1 * ### `types_to_implicit(Types) -> any()` Convert a map of ao-types to an implicit map of types. ### write/2 ### `write(RawMsg, Opts) -> any()` Write a message to the cache. For raw binaries, we write the data at the hashpath of the data (by default the SHA2-256 hash of the data). We link the unattended ID's hashpath for the keys (including `/commitments`) on the message to the underlying data and recurse. We then link each commitment ID to the uncommitted message, such that any of the committed or uncommitted IDs can be read, and once in memory all of the commitments are available. For deep messages, the commitments will also be read, such that the ID of the outer message (which does not include its commitments) will be built upon the commitments of the inner messages. We do not, however, store the IDs from commitments on signed _inner_ messages. We may wish to revisit this. ### write_binary/3 ### `write_binary(Hashpath, Bin, Opts) -> any()` Write a raw binary keys into the store and link it at a given hashpath. ### write_binary/4 * ### `write_binary(Hashpath, Bin, Store, Opts) -> any()` ### write_hashpath/2 ### `write_hashpath(Msg, Opts) -> any()` Write a hashpath and its message to the store and link it. ### write_hashpath/3 * ### `write_hashpath(HP, Msg, Opts) -> any()` ### write_key/6 * ### `write_key(Base, Key, HPAlg, RawCommitments, Store, Opts) -> any()` Write a single key for a message into the store. --- END OF FILE: docs/resources/source-code/hb_cache.md --- --- START OF FILE: docs/resources/source-code/hb_client.md --- # [Module hb_client.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_client.erl) ## Function Index ##
add_route/3
arweave_timestamp/0Grab the latest block information from the Arweave gateway node.
prefix_keys/3*
resolve/4Resolve a message pair on a remote node.
routes/2
upload/2Upload a data item to the bundler node.
upload/3
upload_empty_message_test/0*
upload_empty_raw_ans104_test/0*
upload_raw_ans104_test/0*
upload_raw_ans104_with_anchor_test/0*
upload_single_layer_message_test/0*
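A hedged sketch of resolving a message pair on a remote node and uploading a signed message to the bundler; the node URL and the `{ok, ...}` return shapes are assumptions.

```erlang
%% Node URL and {ok, ...} return shapes are assumptions.
remote_sketch(Msg1, Msg2, SignedMsg, Opts) ->
    Node = <<"http://remote-node.example:8734">>,
    {ok, Result} = hb_client:resolve(Node, Msg1, Msg2, Opts),
    {ok, _UploadResponse} = hb_client:upload(SignedMsg, Opts),
    Result.
```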
## Function Details ## ### add_route/3 ### `add_route(Node, Route, Opts) -> any()` ### arweave_timestamp/0 ### `arweave_timestamp() -> any()` Grab the latest block information from the Arweave gateway node. ### prefix_keys/3 * ### `prefix_keys(Prefix, Message, Opts) -> any()` ### resolve/4 ### `resolve(Node, Msg1, Msg2, Opts) -> any()` Resolve a message pair on a remote node. The message pair is first transformed into a singleton request, by prefixing the keys in both messages for the path segment that they relate to, and then adjusting the "Path" field from the second message. ### routes/2 ### `routes(Node, Opts) -> any()` ### upload/2 ### `upload(Msg, Opts) -> any()` Upload a data item to the bundler node. Note: Uploads once per commitment device. Callers should filter the commitments to only include the ones they are interested in, if this is not the desired behavior. ### upload/3 ### `upload(Msg, Opts, X3) -> any()` ### upload_empty_message_test/0 * ### `upload_empty_message_test() -> any()` ### upload_empty_raw_ans104_test/0 * ### `upload_empty_raw_ans104_test() -> any()` ### upload_raw_ans104_test/0 * ### `upload_raw_ans104_test() -> any()` ### upload_raw_ans104_with_anchor_test/0 * ### `upload_raw_ans104_with_anchor_test() -> any()` ### upload_single_layer_message_test/0 * ### `upload_single_layer_message_test() -> any()` --- END OF FILE: docs/resources/source-code/hb_client.md --- --- START OF FILE: docs/resources/source-code/hb_crypto.md --- # [Module hb_crypto.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_crypto.erl) Implements the cryptographic functions and wraps the primitives used in HyperBEAM. ## Description ## Abstracted such that this (extremely!) dangerous code can be carefully managed. HyperBEAM currently implements two hashpath algorithms: * `sha-256-chain`: A simple chained SHA-256 hash. * `accumulate-256`: A SHA-256 hash that chains the given IDs and accumulates their values into a single commitment. The accumulate algorithm is experimental and at this point only exists to allow us to test multiple HashPath algorithms in HyperBEAM. ## Function Index ##
accumulate/1Accumulate two IDs, or a list of IDs, into a single commitment.
accumulate/2
count_zeroes/1*Count the number of leading zeroes in a bitstring.
sha256/1Wrap Erlang's crypto:hash/2 to provide a standard interface.
sha256_chain/2Add a new ID to the end of a SHA-256 hash chain.
sha256_chain_test/0*Check that sha-256-chain correctly produces a hash matching the machine's OpenSSL lib's output.
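A short sketch of chaining and accumulating IDs with the primitives above; the input binaries are illustrative, and `sha256/1` is used first to produce the 256-bit IDs the other functions require.

```erlang
%% Input binaries are illustrative; sha256/1 yields the 256-bit IDs required
%% by the chaining and accumulation primitives.
hash_sketch() ->
    ID1 = hb_crypto:sha256(<<"first">>),
    ID2 = hb_crypto:sha256(<<"second">>),
    Chained = hb_crypto:sha256_chain(ID1, ID2),
    %% Per the description above, accumulation simply adds the IDs, so the
    %% result should not depend on their order.
    Acc1 = hb_crypto:accumulate(ID1, ID2),
    Acc2 = hb_crypto:accumulate(ID2, ID1),
    {Chained, Acc1 =:= Acc2}.
```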
## Function Details ## ### accumulate/1 ### `accumulate(IDs) -> any()` Accumulate two IDs, or a list of IDs, into a single commitment. This function requires that the IDs given are already cryptographically-secure, 256-bit values. No further cryptographic operations are performed upon the values, they are simply added together. This is useful in situations where the ordering of the IDs is not important, or explicitly detrimental to the utility of the final commitment. No ordering information is preserved in the final commitment. ### accumulate/2 ### `accumulate(ID1, ID2) -> any()` ### count_zeroes/1 * ### `count_zeroes(X1) -> any()` Count the number of leading zeroes in a bitstring. ### sha256/1 ### `sha256(Data) -> any()` Wrap Erlang's `crypto:hash/2` to provide a standard interface. Under-the-hood, this uses OpenSSL. ### sha256_chain/2 ### `sha256_chain(ID1, ID2) -> any()` Add a new ID to the end of a SHA-256 hash chain. ### sha256_chain_test/0 * ### `sha256_chain_test() -> any()` Check that `sha-256-chain` correctly produces a hash matching the machine's OpenSSL lib's output. Further (in case of a bug in our or Erlang's usage of OpenSSL), check that the output has at least has a high level of entropy. --- END OF FILE: docs/resources/source-code/hb_crypto.md --- --- START OF FILE: docs/resources/source-code/hb_debugger.md --- # [Module hb_debugger.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_debugger.erl) A module that provides bootstrapping interfaces for external debuggers to connect to HyperBEAM. ## Description ## The simplest way to utilize an external graphical debugger is to use the `erlang-ls` extension for VS Code, Emacs, or other Language Server Protocol (LSP) compatible editors. This repository contains a `launch.json` configuration file for VS Code that can be used to spawn a new HyperBEAM, attach the debugger to it, and execute the specified `Module:Function(Args)`. Additionally, the node can be started with `rebar3 debugging` in order to allow access to the console while also allowing the debugger to attach. Boot time is approximately 10 seconds. ## Function Index ##
await_breakpoint/0Await a new breakpoint being set by the debugger.
await_breakpoint/1*
await_debugger/0*Await a debugger to be attached to the node.
await_debugger/1*
interpret/1*Attempt to interpret a specified module to load it into the debugger.
interpret_modules/1*Interpret modules from a list of atom prefixes.
is_debugging_node_connected/0*Is another Distributed Erlang node connected to us?
start/0
start_and_break/2A bootstrapping function to wait for an external debugger to be attached, then add a breakpoint on the specified Module:Function(Args), then call it.
start_and_break/3
start_and_break/4
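As a usage sketch, the bootstrapping flow described above can be driven by waiting for the debugger and breaking on a chosen call; the module and arguments below are placeholders, not a recommended entry point.

```erlang
%% Hypothetical invocation: wait for an external debugger to attach, set a
%% breakpoint on hb_http_server:start/0, then call it.
debug_server_start() ->
    hb_debugger:start_and_break(hb_http_server, start, []).
```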
## Function Details ## ### await_breakpoint/0 ### `await_breakpoint() -> any()` Await a new breakpoint being set by the debugger. ### await_breakpoint/1 * ### `await_breakpoint(N) -> any()` ### await_debugger/0 * ### `await_debugger() -> any()` Await a debugger to be attached to the node. ### await_debugger/1 * ### `await_debugger(N) -> any()` ### interpret/1 * ### `interpret(Module) -> any()` Attempt to interpret a specified module to load it into the debugger. `int:i/1` seems to have an issue that will cause it to fail sporadically with `error:undef` on some modules. This error appears not to be catchable through the normal means. Subsequently, we attempt the load in a separate process and wait for it to complete. If we do not receive a response in a reasonable amount of time, we assume that the module failed to load and return `false`. ### interpret_modules/1 * ### `interpret_modules(Prefixes) -> any()` Interpret modules from a list of atom prefixes. ### is_debugging_node_connected/0 * ### `is_debugging_node_connected() -> any()` Is another Distributed Erlang node connected to us? ### start/0 ### `start() -> any()` ### start_and_break/2 ### `start_and_break(Module, Function) -> any()` A bootstrapping function to wait for an external debugger to be attached, then add a breakpoint on the specified `Module:Function(Args)`, then call it. ### start_and_break/3 ### `start_and_break(Module, Function, Args) -> any()` ### start_and_break/4 ### `start_and_break(Module, Function, Args, DebuggerScope) -> any()` --- END OF FILE: docs/resources/source-code/hb_debugger.md --- --- START OF FILE: docs/resources/source-code/hb_escape.md --- # [Module hb_escape.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_escape.erl) Escape and unescape mixed case values for use in HTTP headers. ## Description ## This is necessary for encodings of AO-Core messages for transmission in HTTP/2 and HTTP/3, because uppercase header keys are explicitly disallowed. While most map keys in HyperBEAM are normalized to lowercase, IDs are not. Subsequently, we encode all header keys to lowercase %-encoded URI-style strings before transmission. ## Function Index ##
decode/1Decode a URI-encoded string back to a binary.
decode_keys/2Return a message with all of its keys decoded.
encode/1Encode a binary as a URI-encoded string.
encode_keys/2URI encode keys in the base layer of a message.
escape_byte/1*Escape a single byte as a URI-encoded string.
escape_unescape_identity_test/0*
escape_unescape_special_chars_test/0*
hex_digit/1*
hex_value/1*
percent_escape/1*Escape a list of characters as a URI-encoded string.
percent_unescape/1*Unescape a URI-encoded string.
unescape_specific_test/0*
uppercase_test/0*
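The spirit of the key encoding described above can be sketched as percent-escaping any uppercase bytes so the resulting header key is lowercase-safe. This is an illustration of the idea, assuming a simple byte-wise escape; it is not the module's exact algorithm.

```erlang
%% Sketch: %-escape uppercase ASCII bytes in a header key, leaving other
%% bytes untouched, so the key survives lowercase-only HTTP/2/3 transport.
escape_upper(Key) when is_binary(Key) ->
    << <<(escape_byte(C))/binary>> || <<C>> <= Key >>.

escape_byte(C) when C >= $A, C =< $Z ->
    iolist_to_binary(io_lib:format("%~2.16.0b", [C]));
escape_byte(C) ->
    <<C>>.
```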
## Function Details ## ### decode/1 ### `decode(Bin) -> any()` Decode a URI-encoded string back to a binary. ### decode_keys/2 ### `decode_keys(Msg, Opts) -> any()` Return a message with all of its keys decoded. ### encode/1 ### `encode(Bin) -> any()` Encode a binary as a URI-encoded string. ### encode_keys/2 ### `encode_keys(Msg, Opts) -> any()` URI encode keys in the base layer of a message. Does not recurse. ### escape_byte/1 * ### `escape_byte(C) -> any()` Escape a single byte as a URI-encoded string. ### escape_unescape_identity_test/0 * ### `escape_unescape_identity_test() -> any()` ### escape_unescape_special_chars_test/0 * ### `escape_unescape_special_chars_test() -> any()` ### hex_digit/1 * ### `hex_digit(N) -> any()` ### hex_value/1 * ### `hex_value(C) -> any()` ### percent_escape/1 * ### `percent_escape(Cs) -> any()` Escape a list of characters as a URI-encoded string. ### percent_unescape/1 * ### `percent_unescape(Cs) -> any()` Unescape a URI-encoded string. ### unescape_specific_test/0 * ### `unescape_specific_test() -> any()` ### uppercase_test/0 * ### `uppercase_test() -> any()` --- END OF FILE: docs/resources/source-code/hb_escape.md --- --- START OF FILE: docs/resources/source-code/hb_event.md --- # [Module hb_event.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_event.erl) Wrapper for incrementing prometheus counters. ## Function Index ##
await_prometheus_started/0*Delay the event server until prometheus is started.
counters/0Return a message containing the current counter values for all logged HyperBEAM events.
handle_events/0*
handle_tracer/3*
increment/3Increment the counter for the given topic and message.
increment/4
log/1Debugging log function.
log/2
log/3
log/4
log/5
log/6
parse_name/1*
raw_counters/0*
server/0*
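As a usage sketch, counting a custom event under a non-`global` topic (so that it is not filtered out) might look like the call below; the topic and event names are illustrative only.

```erlang
%% Hypothetical call: record one occurrence of an event in the <<"http">>
%% topic. The empty map stands in for the node options.
hb_event:increment(<<"http">>, <<"request-received">>, #{}).
```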
## Function Details ## ### await_prometheus_started/0 * ### `await_prometheus_started() -> any()` Delay the event server until prometheus is started. ### counters/0 ### `counters() -> any()` Return a message containing the current counter values for all logged HyperBEAM events. The result comes in a form as follows: /GroupName/EventName -> Count, where the `EventName` is derived from the value of the first term sent to the `?event(...)` macros. ### handle_events/0 * ### `handle_events() -> any()` ### handle_tracer/3 * ### `handle_tracer(Topic, X, Opts) -> any()` ### increment/3 ### `increment(Topic, Message, Opts) -> any()` Increment the counter for the given topic and message. Registers the counter if it doesn't exist. If the topic is `global`, the message is ignored. This means that events must specify a topic if they want to be counted, filtering debug messages. Similarly, events with a topic that begins with `debug` are ignored. ### increment/4 ### `increment(Topic, Message, Opts, Count) -> any()` ### log/1 ### `log(X) -> any()` Debugging log function. For now, it just prints to standard error. ### log/2 ### `log(Topic, X) -> any()` ### log/3 ### `log(Topic, X, Mod) -> any()` ### log/4 ### `log(Topic, X, Mod, Func) -> any()` ### log/5 ### `log(Topic, X, Mod, Func, Line) -> any()` ### log/6 ### `log(Topic, X, Mod, Func, Line, Opts) -> any()` ### parse_name/1 * ### `parse_name(Name) -> any()` ### raw_counters/0 * ### `raw_counters() -> any()` ### server/0 * ### `server() -> any()` --- END OF FILE: docs/resources/source-code/hb_event.md --- --- START OF FILE: docs/resources/source-code/hb_examples.md --- # [Module hb_examples.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_examples.erl) This module contains end-to-end tests for HyperBEAM, accessed through the HTTP interface. ## Description ## As well as testing the system, you can use these tests as examples of how to interact with HyperBEAM nodes. ## Function Index ##
create_schedule_aos2_test_disabled/0*
paid_wasm/0*
paid_wasm_test_/0*Gain signed WASM responses from a node and verify them.
relay_with_payments_test/0*
relay_with_payments_test_/0*Start a node running the simple pay meta device, and use it to relay a message for a client.
schedule/2*
schedule/3*
schedule/4*
## Function Details ## ### create_schedule_aos2_test_disabled/0 * ### `create_schedule_aos2_test_disabled() -> any()` ### paid_wasm/0 * ### `paid_wasm() -> any()` ### paid_wasm_test_/0 * ### `paid_wasm_test_() -> any()` Gain signed WASM responses from a node and verify them. 1. Start the client with a small balance. 2. Execute a simple WASM function on the host node. 3. Verify the response is correct and signed by the host node. 4. Get the balance of the client and verify it has been deducted. ### relay_with_payments_test/0 * ### `relay_with_payments_test() -> any()` ### relay_with_payments_test_/0 * ### `relay_with_payments_test_() -> any()` Start a node running the simple pay meta device, and use it to relay a message for a client. We must ensure: 1. When the client has no balance, the relay fails. 2. The operator is able to topup for the client. 3. The client has the correct balance after the topup. 4. The relay succeeds when the client has enough balance. 5. The received message is signed by the host using http-sig and validates correctly. ### schedule/2 * ### `schedule(ProcMsg, Target) -> any()` ### schedule/3 * ### `schedule(ProcMsg, Target, Wallet) -> any()` ### schedule/4 * ### `schedule(ProcMsg, Target, Wallet, Node) -> any()` --- END OF FILE: docs/resources/source-code/hb_examples.md --- --- START OF FILE: docs/resources/source-code/hb_features.md --- # [Module hb_features.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_features.erl) A module that exports a list of feature flags that the node supports using the `-ifdef` macro. ## Description ## As a consequence, this module acts as a proxy of information between the build system and the runtime execution environment. ## Function Index ##
all/0Returns a list of all feature flags that the node supports.
enabled/1Returns true if the feature flag is enabled.
genesis_wasm/0
http3/0
rocksdb/0
test/0
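The `-ifdef`-based flag export mentioned in the description follows a standard Erlang pattern, sketched below. The macro name `ENABLE_ROCKSDB` is an assumption for illustration; the real flag names are set by the build profiles.

```erlang
%% Illustrative reconstruction of a compile-time feature flag: the build
%% system defines (or omits) the macro, and the function reports the result.
-ifdef(ENABLE_ROCKSDB).
rocksdb() -> true.
-else.
rocksdb() -> false.
-endif.
```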
## Function Details ## ### all/0 ### `all() -> any()` Returns a list of all feature flags that the node supports. ### enabled/1 ### `enabled(Feature) -> any()` Returns true if the feature flag is enabled. ### genesis_wasm/0 ### `genesis_wasm() -> any()` ### http3/0 ### `http3() -> any()` ### rocksdb/0 ### `rocksdb() -> any()` ### test/0 ### `test() -> any()` --- END OF FILE: docs/resources/source-code/hb_features.md --- --- START OF FILE: docs/resources/source-code/hb_gateway_client.md --- # [Module hb_gateway_client.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_gateway_client.erl) Implementation of Arweave's GraphQL API to gain access to specific items of data stored on the network. ## Description ## This module must be used to get full HyperBEAM `structured@1.0` form messages from data items stored on the network, as Arweave gateways do not presently expose all necessary fields to retrieve this information outside of the GraphQL API. When gateways integrate serving in `httpsig@1.0` form, this module will be deprecated. ## Function Index ##
ans104_no_data_item_test/0*
ao_dataitem_test/0*Test optimistic index.
data/2Get the data associated with a transaction by its ID, using the node's Arweave gateway peers.
decode_id_or_null/1*
decode_or_null/1*
item_spec/0*Gives the fields of a transaction that are needed to construct an ANS-104 message.
l1_transaction_test/0*Test l1 message from graphql.
l2_dataitem_test/0*Test l2 message from graphql.
normalize_null/1*
query/2*Run a GraphQL request encoded as a binary.
read/2Get a data item (including data and tags) by its ID, using the node's GraphQL peers.
result_to_message/2Takes a GraphQL item node, matches it with the appropriate data from a gateway, then returns {ok, ParsedMsg}.
result_to_message/3*
scheduler_location/2Find the location of the scheduler based on its ID, through GraphQL.
scheduler_location_test/0*Test that we can get the scheduler location.
subindex_to_tags/1*Takes a list of messages with name and value fields, and formats them as a GraphQL tags argument.
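For orientation, a GraphQL request for a single transaction, as issued against gateway peers, has roughly the shape below. The field selection is abbreviated and assumed; the authoritative list lives in `item_spec/0`.

```erlang
%% Approximate query shape for fetching one transaction by ID.
tx_query() ->
    <<"query($id: ID!) { transaction(id: $id) { id anchor signature owner { address key } tags { name value } } }">>.
```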
## Function Details ## ### ans104_no_data_item_test/0 * ### `ans104_no_data_item_test() -> any()` ### ao_dataitem_test/0 * ### `ao_dataitem_test() -> any()` Test optimistic index ### data/2 ### `data(ID, Opts) -> any()` Get the data associated with a transaction by its ID, using the node's Arweave `gateway` peers. The item is expected to be available in its unmodified (by caches or other proxies) form at the following location: `https://<gateway>/raw/<id>`, where `<id>` is the base64-url-encoded transaction ID. ### decode_id_or_null/1 * ### `decode_id_or_null(Bin) -> any()` ### decode_or_null/1 * ### `decode_or_null(Bin) -> any()` ### item_spec/0 * ### `item_spec() -> any()` Gives the fields of a transaction that are needed to construct an ANS-104 message. ### l1_transaction_test/0 * ### `l1_transaction_test() -> any()` Test l1 message from graphql ### l2_dataitem_test/0 * ### `l2_dataitem_test() -> any()` Test l2 message from graphql ### normalize_null/1 * ### `normalize_null(Bin) -> any()` ### query/2 * ### `query(Query, Opts) -> any()` Run a GraphQL request encoded as a binary. The node message may contain a list of URLs to use, optionally as a tuple with an additional map of options to use for the request. ### read/2 ### `read(ID, Opts) -> any()` Get a data item (including data and tags) by its ID, using the node's GraphQL peers. It uses the following GraphQL schema: type Transaction { id: ID! anchor: String! signature: String! recipient: String! owner: Owner { address: String! key: String! }! fee: Amount! quantity: Amount! data: MetaData! tags: [Tag { name: String! value: String! }!]! } type Amount { winston: String! ar: String! } ### result_to_message/2 ### `result_to_message(Item, Opts) -> any()` Takes a GraphQL item node, matches it with the appropriate data from a gateway, then returns `{ok, ParsedMsg}`. ### result_to_message/3 * ### `result_to_message(ExpectedID, Item, Opts) -> any()` ### scheduler_location/2 ### `scheduler_location(Address, Opts) -> any()` Find the location of the scheduler based on its ID, through GraphQL. ### scheduler_location_test/0 * ### `scheduler_location_test() -> any()` Test that we can get the scheduler location. ### subindex_to_tags/1 * ### `subindex_to_tags(Subindex) -> any()` Takes a list of messages with `name` and `value` fields, and formats them as a GraphQL `tags` argument. --- END OF FILE: docs/resources/source-code/hb_gateway_client.md --- --- START OF FILE: docs/resources/source-code/hb_http_benchmark_tests.md --- # [Module hb_http_benchmark_tests.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_http_benchmark_tests.erl) --- END OF FILE: docs/resources/source-code/hb_http_benchmark_tests.md --- --- START OF FILE: docs/resources/source-code/hb_http_client_sup.md --- # [Module hb_http_client_sup.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_http_client_sup.erl) The supervisor for the gun HTTP client wrapper. __Behaviours:__ [`supervisor`](supervisor.md). ## Function Index ##
init/1
start_link/1
## Function Details ## ### init/1 ### `init(Opts) -> any()` ### start_link/1 ### `start_link(Opts) -> any()` --- END OF FILE: docs/resources/source-code/hb_http_client_sup.md --- --- START OF FILE: docs/resources/source-code/hb_http_client.md --- # [Module hb_http_client.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_http_client.erl) A wrapper library for gun. __Behaviours:__ [`gen_server`](gen_server.md). ## Description ## This module originates from the Arweave project, and has been modified for use in HyperBEAM. ## Function Index ##
await_response/2*
dec_prometheus_gauge/1*Safe wrapper for prometheus_gauge:dec/2.
download_metric/2*
get_status_class/1*Return the HTTP status class label for cowboy_requests_total and gun_requests_total metrics.
gun_req/3*
handle_call/3
handle_cast/2
handle_info/2
httpc_req/3*
inc_prometheus_counter/3*
inc_prometheus_gauge/1*Safe wrapper for prometheus_gauge:inc/2.
init/1
init_prometheus/1*
log/5*
maybe_invoke_monitor/2*Invoke the HTTP monitor message with AO-Core, if it is set in the node message key.
method_to_bin/1*
open_connection/2*
parse_peer/2*
record_duration/2*Record the duration of the request in an async process.
record_response_status/3*
reply_error/2*
req/2
req/3*
request/3*
start_link/1
terminate/2
upload_metric/1*
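The "safe wrapper" functions listed above (`inc_prometheus_gauge/1`, `dec_prometheus_gauge/1`) follow a common pattern: guard the metric update so that a metrics failure never crashes the request path. A minimal sketch of that pattern, not the module's exact code:

```erlang
%% Illustrative pattern: swallow any error from the metrics library so the
%% HTTP request itself is never affected by instrumentation failures.
safe_inc_gauge(Name, Labels) ->
    try
        prometheus_gauge:inc(Name, Labels)
    catch
        _Class:_Reason -> ok
    end.
```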
## Function Details ## ### await_response/2 * ### `await_response(Args, Opts) -> any()` ### dec_prometheus_gauge/1 * ### `dec_prometheus_gauge(Name) -> any()` Safe wrapper for prometheus_gauge:dec/2. ### download_metric/2 * ### `download_metric(Data, X2) -> any()` ### get_status_class/1 * ### `get_status_class(Data) -> any()` Return the HTTP status class label for cowboy_requests_total and gun_requests_total metrics. ### gun_req/3 * ### `gun_req(Args, ReestablishedConnection, Opts) -> any()` ### handle_call/3 ### `handle_call(Request, From, State) -> any()` ### handle_cast/2 ### `handle_cast(Cast, State) -> any()` ### handle_info/2 ### `handle_info(Message, State) -> any()` ### httpc_req/3 * ### `httpc_req(Args, X2, Opts) -> any()` ### inc_prometheus_counter/3 * ### `inc_prometheus_counter(Name, Labels, Value) -> any()` ### inc_prometheus_gauge/1 * ### `inc_prometheus_gauge(Name) -> any()` Safe wrapper for prometheus_gauge:inc/2. ### init/1 ### `init(Opts) -> any()` ### init_prometheus/1 * ### `init_prometheus(Opts) -> any()` ### log/5 * ### `log(Type, Event, X3, Reason, Opts) -> any()` ### maybe_invoke_monitor/2 * ### `maybe_invoke_monitor(Details, Opts) -> any()` Invoke the HTTP monitor message with AO-Core, if it is set in the node message key. We invoke the given message with the `body` set to a signed version of the details. This allows node operators to configure their machine to record duration statistics into customized data stores, computations, or processes etc. Additionally, we include the `http_reference` value, if set in the given `opts`. We use `hb_ao:get` rather than `hb_opts:get`, as settings configured by the `~router@1.0` route `opts` key are unable to generate atoms. ### method_to_bin/1 * ### `method_to_bin(X1) -> any()` ### open_connection/2 * ### `open_connection(X1, Opts) -> any()` ### parse_peer/2 * ### `parse_peer(Peer, Opts) -> any()` ### record_duration/2 * ### `record_duration(Details, Opts) -> any()` Record the duration of the request in an async process. We write the data to prometheus if the application is enabled, as well as invoking the `http_monitor` if appropriate. ### record_response_status/3 * ### `record_response_status(Method, Path, Response) -> any()` ### reply_error/2 * ### `reply_error(PendingRequests, Reason) -> any()` ### req/2 ### `req(Args, Opts) -> any()` ### req/3 * ### `req(Args, ReestablishedConnection, Opts) -> any()` ### request/3 * ### `request(PID, Args, Opts) -> any()` ### start_link/1 ### `start_link(Opts) -> any()` ### terminate/2 ### `terminate(Reason, State) -> any()` ### upload_metric/1 * ### `upload_metric(X1) -> any()` --- END OF FILE: docs/resources/source-code/hb_http_client.md --- --- START OF FILE: docs/resources/source-code/hb_http_server.md --- # [Module hb_http_server.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_http_server.erl) A router that attaches a HTTP server to the AO-Core resolver. ## Description ## Because AO-Core is built to speak in HTTP semantics, this module only has to marshal the HTTP request into a message, and then pass it to the AO-Core resolver. `hb_http:reply/4` is used to respond to the client, handling the process of converting a message back into an HTTP response. The router uses an `Opts` message as its Cowboy initial state, such that changing it on start of the router server allows for the execution parameters of all downstream requests to be controlled. ## Function Index ##
allowed_methods/2Return the list of allowed methods for the HTTP server.
cors_reply/2*Reply to CORS preflight requests.
get_opts/0Get the node message for the current process.
get_opts/1
handle_request/3*Handle all non-CORS preflight requests as AO-Core requests.
http3_conn_sup_loop/0*
init/2Entrypoint for all HTTP requests.
new_server/1*Trigger the creation of a new HTTP server node.
read_body/1*Helper to grab the full body of a HTTP request, even if it's chunked.
read_body/2*
set_default_opts/1Apply the default node message to the given opts map.
set_node_opts_test/0*Ensure that the start hook can be used to modify the node options.
set_opts/1Merges the provided Opts with uncommitted values from Request, preserves the http_server value, and updates node_history by prepending the Request.
set_opts/2
set_opts_test/0*Test the set_opts/2 function that merges request with options, manages node history, and updates server state.
set_proc_server_id/1Initialize the server ID for the current process.
start/0Starts the HTTP server.
start/1
start_http2/3*
start_http3/3*
start_node/0Test that we can start the server, send a message, and get a response.
start_node/1
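As a usage sketch, a node can be started with an explicit node message via `start/1`. The option key shown (`port`) is an assumption for illustration; the full set of supported options is defined by the node message.

```erlang
%% Hypothetical configuration: start an HTTP server node listening locally.
start_local_node() ->
    hb_http_server:start(#{ port => 8734 }).
```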
## Function Details ## ### allowed_methods/2 ### `allowed_methods(Req, State) -> any()` Return the list of allowed methods for the HTTP server. ### cors_reply/2 * ### `cors_reply(Req, ServerID) -> any()` Reply to CORS preflight requests. ### get_opts/0 ### `get_opts() -> any()` Get the node message for the current process. ### get_opts/1 ### `get_opts(NodeMsg) -> any()` ### handle_request/3 * ### `handle_request(RawReq, Body, ServerID) -> any()` Handle all non-CORS preflight requests as AO-Core requests. Execution starts by parsing the HTTP request into HyperBEAM's message format, then passing the message directly to `meta@1.0` which handles calling AO-Core in the appropriate way. ### http3_conn_sup_loop/0 * ### `http3_conn_sup_loop() -> any()` ### init/2 ### `init(Req, ServerID) -> any()` Entrypoint for all HTTP requests. Receives the Cowboy request option and the server ID, which can be used to lookup the node message. ### new_server/1 * ### `new_server(RawNodeMsg) -> any()` Trigger the creation of a new HTTP server node. Accepts a `NodeMsg` message, which is used to configure the server. This function executes the `start` hook on the node, giving it the opportunity to modify the `NodeMsg` before it is used to configure the server. The `start` hook gives and expects the node message to be in the `body` key. ### read_body/1 * ### `read_body(Req) -> any()` Helper to grab the full body of a HTTP request, even if it's chunked. ### read_body/2 * ### `read_body(Req0, Acc) -> any()` ### set_default_opts/1 ### `set_default_opts(Opts) -> any()` Apply the default node message to the given opts map. ### set_node_opts_test/0 * ### `set_node_opts_test() -> any()` Ensure that the `start` hook can be used to modify the node options. We do this by creating a message with a device that has a `start` key. This key takes the message's body (the anticipated node options) and returns a modified version of that body, which will be used to configure the node. We then check that the node options were modified as we expected. ### set_opts/1 ### `set_opts(Opts) -> any()` Merges the provided `Opts` with uncommitted values from `Request`, preserves the http_server value, and updates node_history by prepending the `Request`. If a server reference exists, updates the Cowboy environment variable 'node_msg' with the resulting options map. ### set_opts/2 ### `set_opts(Request, Opts) -> any()` ### set_opts_test/0 * ### `set_opts_test() -> any()` Test the set_opts/2 function that merges request with options, manages node history, and updates server state. ### set_proc_server_id/1 ### `set_proc_server_id(ServerID) -> any()` Initialize the server ID for the current process. ### start/0 ### `start() -> any()` Starts the HTTP server. Optionally accepts an `Opts` message, which is used as the source for server configuration settings, as well as the `Opts` argument to use for all AO-Core resolution requests downstream. ### start/1 ### `start(Opts) -> any()` ### start_http2/3 * ### `start_http2(ServerID, ProtoOpts, NodeMsg) -> any()` ### start_http3/3 * ### `start_http3(ServerID, ProtoOpts, NodeMsg) -> any()` ### start_node/0 ### `start_node() -> any()` Test that we can start the server, send a message, and get a response. ### start_node/1 ### `start_node(Opts) -> any()` --- END OF FILE: docs/resources/source-code/hb_http_server.md --- --- START OF FILE: docs/resources/source-code/hb_http.md --- # [Module hb_http.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_http.erl) ## Function Index ##
accept_to_codec/2Calculate the codec name to use for a reply given its initiating Cowboy request, the parsed TABM request, and the response message.
add_cors_headers/3*Add permissive CORS headers to a message, if the message has not already specified CORS headers.
allowed_status/2*Check if a status is allowed, according to the configuration.
ans104_wasm_test/0*
codec_to_content_type/2*Call the content-type key on a message with the given codec, using a fast-path for options that are not needed for this one-time lookup.
cors_get_test/0*
default_codec/1*Return the default codec for the given options.
empty_inbox/1*Empty the inbox of the current process for all messages with the given reference.
encode_reply/4*Generate the headers and body for a HTTP response message.
get/2Gets a URL via HTTP and returns the resulting message in deserialized form.
get/3
get_deep_signed_wasm_state_test/0*
get_deep_unsigned_wasm_state_test/0*
http_response_to_httpsig/4*Convert a HTTP response to a httpsig message.
httpsig_to_tabm_singleton/3*HTTPSig messages are inherently mixed into the transport layer, so they require special handling in order to be converted to a normalized message.
index_request_test/0*
index_test/0*
message_to_request/2Given a message, return the information needed to make the request.
mime_to_codec/2*Find a codec name from a mime-type.
multirequest/5*Dispatch the same HTTP request to many nodes.
multirequest_opt/5*Get a value for a multirequest option from the config or message.
multirequest_opts/3*Get the multirequest options from the config or message.
nested_ao_resolve_test/0*
normalize_unsigned/3*Add the method and path to a message, if they are not already present.
parallel_multirequest/8*Dispatch the same HTTP request to many nodes in parallel.
parallel_responses/7*Collect the necessary number of responses, and stop workers if configured to do so.
post/3Posts a message to a URL on a remote peer via HTTP.
post/4
prepare_request/6*Turn a set of request arguments into a request message, formatted in the preferred format.
remove_unless_signed/3*Remove all keys from the message unless they are signed.
reply/4Reply to the client's HTTP request with a message.
reply/5*
req_to_tabm_singleton/3Convert a cowboy request to a normalized message.
request/2Posts a binary to a URL on a remote peer via HTTP, returning the raw binary body.
request/4
request/5
route_to_request/3*Parse a dev_router:route response and return a tuple of request parameters.
run_wasm_signed_test/0*
run_wasm_unsigned_test/0*
send_large_signed_request_test/0*
serial_multirequest/7*Serially request a message, collecting responses until the required number of responses have been gathered.
simple_ao_resolve_signed_test/0*
simple_ao_resolve_unsigned_test/0*
start/0
wasm_compute_request/3*
wasm_compute_request/4*
## Function Details ## ### accept_to_codec/2 ### `accept_to_codec(TABMReq, Opts) -> any()` Calculate the codec name to use for a reply given its initiating Cowboy request, the parsed TABM request, and the response message. The precedence order for finding the codec is: 1. The `accept-codec` field in the message 2. The `accept` field in the request headers 3. The default codec Options can be specified in mime-type format (`application/*`) or in AO device format (`device@1.0`). ### add_cors_headers/3 * ### `add_cors_headers(Msg, ReqHdr, Opts) -> any()` Add permissive CORS headers to a message, if the message has not already specified CORS headers. ### allowed_status/2 * ### `allowed_status(ResponseMsg, Statuses) -> any()` Check if a status is allowed, according to the configuration. ### ans104_wasm_test/0 * ### `ans104_wasm_test() -> any()` ### codec_to_content_type/2 * ### `codec_to_content_type(Codec, Opts) -> any()` Call the `content-type` key on a message with the given codec, using a fast-path for options that are not needed for this one-time lookup. ### cors_get_test/0 * ### `cors_get_test() -> any()` ### default_codec/1 * ### `default_codec(Opts) -> any()` Return the default codec for the given options. ### empty_inbox/1 * ### `empty_inbox(Ref) -> any()` Empty the inbox of the current process for all messages with the given reference. ### encode_reply/4 * ### `encode_reply(Status, TABMReq, Message, Opts) -> any()` Generate the headers and body for a HTTP response message. ### get/2 ### `get(Node, Opts) -> any()` Gets a URL via HTTP and returns the resulting message in deserialized form. ### get/3 ### `get(Node, PathBin, Opts) -> any()` ### get_deep_signed_wasm_state_test/0 * ### `get_deep_signed_wasm_state_test() -> any()` ### get_deep_unsigned_wasm_state_test/0 * ### `get_deep_unsigned_wasm_state_test() -> any()` ### http_response_to_httpsig/4 * ### `http_response_to_httpsig(Status, HeaderMap, Body, Opts) -> any()` Convert a HTTP response to a httpsig message. ### httpsig_to_tabm_singleton/3 * ### `httpsig_to_tabm_singleton(Req, Body, Opts) -> any()` HTTPSig messages are inherently mixed into the transport layer, so they require special handling in order to be converted to a normalized message. In particular, the signatures are verified if present and required by the node configuration. Additionally, non-committed fields are removed from the message if it is signed, with the exception of the `path` and `method` fields. ### index_request_test/0 * ### `index_request_test() -> any()` ### index_test/0 * ### `index_test() -> any()` ### message_to_request/2 ### `message_to_request(M, Opts) -> any()` Given a message, return the information needed to make the request. ### mime_to_codec/2 * ### `mime_to_codec(X1, Opts) -> any()` Find a codec name from a mime-type. ### multirequest/5 * ### `multirequest(Config, Method, Path, Message, Opts) -> any()` Dispatch the same HTTP request to many nodes. Can be configured to await responses from all nodes or just one, and to halt all requests after it has received the required number of responses, or to leave all requests running until they have all completed. Default: Race for first response. Expects a config message of the following form: /Nodes/1..n: Hostname | #{ hostname => Hostname, address => Address } /Responses: Number of responses to gather /Stop-After: Should we stop after the required number of responses? /Parallel: Should we run the requests in parallel?
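Expressed as an Erlang map, the multirequest configuration described above might look like the sketch below. The key names mirror the prose (`/Nodes`, `/Responses`, `/Stop-After`, `/Parallel`) and should be treated as illustrative rather than authoritative.

```erlang
%% Sketch of a multirequest configuration: gather two responses from a pool
%% of nodes, in parallel, and stop outstanding requests once satisfied.
Config = #{
    <<"nodes">> => [<<"http://node-1.example">>, <<"http://node-2.example">>],
    <<"responses">> => 2,
    <<"stop-after">> => true,
    <<"parallel">> => true
}.
```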
### multirequest_opt/5 * ### `multirequest_opt(Key, Config, Message, Default, Opts) -> any()` Get a value for a multirequest option from the config or message. ### multirequest_opts/3 * ### `multirequest_opts(Config, Message, Opts) -> any()` Get the multirequest options from the config or message. The options in the message take precedence over the options in the config. ### nested_ao_resolve_test/0 * ### `nested_ao_resolve_test() -> any()` ### normalize_unsigned/3 * ### `normalize_unsigned(Req, Msg, Opts) -> any()` Add the method and path to a message, if they are not already present. Remove browser-added fields that are unhelpful during processing (for example, `content-length`). The precedence order for finding the path is: 1. The path in the message 2. The path in the request URI ### parallel_multirequest/8 * ### `parallel_multirequest(Nodes, Responses, StopAfter, Method, Path, Message, Statuses, Opts) -> any()` Dispatch the same HTTP request to many nodes in parallel. ### parallel_responses/7 * ### `parallel_responses(Res, Procs, Ref, Awaiting, StopAfter, Statuses, Opts) -> any()` Collect the necessary number of responses, and stop workers if configured to do so. ### post/3 ### `post(Node, Message, Opts) -> any()` Posts a message to a URL on a remote peer via HTTP. Returns the resulting message in deserialized form. ### post/4 ### `post(Node, Path, Message, Opts) -> any()` ### prepare_request/6 * ### `prepare_request(Format, Method, Peer, Path, RawMessage, Opts) -> any()` Turn a set of request arguments into a request message, formatted in the preferred format. This function honors the `accept-bundle` option, if it is already present in the message, and sets it to `true` if it is not. ### remove_unless_signed/3 * ### `remove_unless_signed(Key, Msg, Opts) -> any()` Remove all keys from the message unless they are signed. ### reply/4 ### `reply(Req, TABMReq, Message, Opts) -> any()` Reply to the client's HTTP request with a message. ### reply/5 * ### `reply(Req, TABMReq, BinStatus, RawMessage, Opts) -> any()` ### req_to_tabm_singleton/3 ### `req_to_tabm_singleton(Req, Body, Opts) -> any()` Convert a cowboy request to a normalized message. ### request/2 ### `request(Message, Opts) -> any()` Posts a binary to a URL on a remote peer via HTTP, returning the raw binary body. ### request/4 ### `request(Method, Peer, Path, Opts) -> any()` ### request/5 ### `request(Method, Config, Path, Message, Opts) -> any()` ### route_to_request/3 * ### `route_to_request(M, X2, Opts) -> any()` Parse a `dev_router:route` response and return a tuple of request parameters. ### run_wasm_signed_test/0 * ### `run_wasm_signed_test() -> any()` ### run_wasm_unsigned_test/0 * ### `run_wasm_unsigned_test() -> any()` ### send_large_signed_request_test/0 * ### `send_large_signed_request_test() -> any()` ### serial_multirequest/7 * ### `serial_multirequest(Nodes, Remaining, Method, Path, Message, Statuses, Opts) -> any()` Serially request a message, collecting responses until the required number of responses have been gathered. Ensure that the statuses are allowed, according to the configuration.
### simple_ao_resolve_signed_test/0 * ### `simple_ao_resolve_signed_test() -> any()` ### simple_ao_resolve_unsigned_test/0 * ### `simple_ao_resolve_unsigned_test() -> any()` ### start/0 ### `start() -> any()` ### wasm_compute_request/3 * ### `wasm_compute_request(ImageFile, Func, Params) -> any()` ### wasm_compute_request/4 * ### `wasm_compute_request(ImageFile, Func, Params, ResultPath) -> any()` --- END OF FILE: docs/resources/source-code/hb_http.md --- --- START OF FILE: docs/resources/source-code/hb_json.md --- # [Module hb_json.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_json.erl) Wrapper for encoding and decoding JSON. ## Description ## Supports maps and Jiffy's old `ejson` format. This module abstracts the underlying JSON library, allowing us to switch between libraries as needed in the future. ## Function Index ##
decode/1Takes a JSON string and decodes it into an Erlang term.
decode/2
encode/1Takes a term in Erlang's native form and encodes it as a JSON string.
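A round trip through the wrapper looks like the snippet below; the exact shape of the decoded term (for example, whether keys come back as binaries) depends on the underlying library, so treat the result as indicative.

```erlang
%% Encode a map to a JSON binary and decode it back to an Erlang term.
Encoded = hb_json:encode(#{ <<"name">> => <<"hyperbeam">>, <<"count">> => 1 }),
Decoded = hb_json:decode(Encoded).
```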
## Function Details ## ### decode/1 ### `decode(Bin) -> any()` Takes a JSON string and decodes it into an Erlang term. ### decode/2 ### `decode(Bin, Opts) -> any()` ### encode/1 ### `encode(Term) -> any()` Takes a term in Erlang's native form and encodes it as a JSON string. --- END OF FILE: docs/resources/source-code/hb_json.md --- --- START OF FILE: docs/resources/source-code/hb_keccak.md --- # [Module hb_keccak.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_keccak.erl) ## Function Index ##
hash_to_checksum_address/2*
init/0*
keccak_256/1
keccak_256_key_test/0*
keccak_256_key_to_address_test/0*
keccak_256_test/0*
key_to_ethereum_address/1
sha3_256/1
sha3_256_test/0*
to_hex/1*
## Function Details ## ### hash_to_checksum_address/2 * ### `hash_to_checksum_address(Last40, Hash) -> any()` ### init/0 * ### `init() -> any()` ### keccak_256/1 ### `keccak_256(Bin) -> any()` ### keccak_256_key_test/0 * ### `keccak_256_key_test() -> any()` ### keccak_256_key_to_address_test/0 * ### `keccak_256_key_to_address_test() -> any()` ### keccak_256_test/0 * ### `keccak_256_test() -> any()` ### key_to_ethereum_address/1 ### `key_to_ethereum_address(Key) -> any()` ### sha3_256/1 ### `sha3_256(Bin) -> any()` ### sha3_256_test/0 * ### `sha3_256_test() -> any()` ### to_hex/1 * ### `to_hex(Bin) -> any()` --- END OF FILE: docs/resources/source-code/hb_keccak.md --- --- START OF FILE: docs/resources/source-code/hb_link.md --- # [Module hb_link.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_link.erl) Utility functions for working with links. ## Function Index ##
decode_all_links/1Decode links embedded in the headers of a message.
format/1Format a link as a short string suitable for printing.
format/2
format_unresolved/1*Format a link without resolving it.
is_link_key/1Determine if a key is an encoded link.
normalize/2Takes a message and ensures that it is normalized:.
normalize/3
offload_linked_message_test/0*
offload_list_test/0*
read/1Read a link into memory.
read/2
remove_link_specifier/1Remove any +link suffixes from a key.
## Function Details ## ### decode_all_links/1 ### `decode_all_links(Msg) -> any()` Decode links embedded in the headers of a message. ### format/1 ### `format(Link) -> any()` Format a link as a short string suitable for printing. Checks the node options (optionally) given, to see if it should resolve the link to a value before printing. ### format/2 ### `format(Link, Opts) -> any()` ### format_unresolved/1 * ### `format_unresolved(X1) -> any()` Format a link without resolving it. ### is_link_key/1 ### `is_link_key(Key) -> any()` Determine if a key is an encoded link. ### normalize/2 ### `normalize(Msg, Opts) -> any()` Takes a message and ensures that it is normalized: - All literal (binary) lazily-loadable values are in-memory. - All submaps are represented as links, optionally offloading their local values to the cache. - All other values are left unchanged (including their potential types). The response is a non-recursive, fully loaded message. It may still contain types, but all submessages are guaranteed to be linkified. This stands in contrast to `linkify`, which takes a structured message and returns a message with structured links. ### normalize/3 ### `normalize(Msg, Mode, Opts) -> any()` ### offload_linked_message_test/0 * ### `offload_linked_message_test() -> any()` ### offload_list_test/0 * ### `offload_list_test() -> any()` ### read/1 ### `read(Link) -> any()` Read a link into memory. Uses `hb_cache:ensure_loaded/2` under-the-hood. ### read/2 ### `read(Link, Opts) -> any()` ### remove_link_specifier/1 ### `remove_link_specifier(Key) -> any()` Remove any `+link` suffixes from a key. --- END OF FILE: docs/resources/source-code/hb_link.md --- --- START OF FILE: docs/resources/source-code/hb_logger.md --- # [Module hb_logger.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_logger.erl) ## Function Index ##
console/2*
log/2
loop/1*
register/1
report/1
start/0
start/1
## Function Details ## ### console/2 * ### `console(State, Act) -> any()` ### log/2 ### `log(Monitor, Data) -> any()` ### loop/1 * ### `loop(State) -> any()` ### register/1 ### `register(Monitor) -> any()` ### report/1 ### `report(Monitor) -> any()` ### start/0 ### `start() -> any()` ### start/1 ### `start(Client) -> any()` --- END OF FILE: docs/resources/source-code/hb_logger.md --- --- START OF FILE: docs/resources/source-code/hb_maps.md --- # [Module hb_maps.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_maps.erl) An abstraction for working with maps in HyperBEAM, matching the generic `maps` module, but additionally supporting the resolution of links as they are encountered. ## Description ## These functions must be used extremely carefully. In virtually all circumstances, the `hb_ao:resolve/3` or `hb_ao:get/3` functions should be used instead, as they will execute the full AO-Core protocol upon requests (normalizing keys, applying the appropriate device's functions, as well as resolving links). By using this module's functions, you are implicitly making the assumption that the message in question is of the `~message@1.0` form, ignoring any other keys that its actual device may present. This module is intended for the extremely rare circumstances in which the additional overhead of the full AO-Core execution cycle is not acceptable, and the data in question is known to conform to the `~message@1.0` form. If you do not understand any/all of the above, you are in the wrong place! Utilise the `hb_ao` module and read the documentation therein, saving yourself from the inevitable issues that will arise from using this module without understanding the full implications. You have been warned. ## Function Index ##
filter/2
filter/3
filter_passively_loads_test/0*
filter_with_link_test/0*
filtermap/2
filtermap/3
filtermap_passively_loads_test/0*
filtermap_with_link_test/0*
find/2
find/3
fold/3
fold/4
fold_with_typed_link_test/0*
from_list/1
get/2
get/3
get/4Get a value from a map, resolving links as they are encountered in both the TABM encoded link format, as well as the structured type.
get_with_link_test/0*
get_with_typed_link_test/0*
is_key/2
is_key/3
keys/1
keys/2
map/2
map/3
map_with_link_test/0*
merge/2
merge/3
put/3
put/4
remove/2
remove/3
resolve_on_link_test/0*
size/1
size/2
take/2
take/3
to_list/1
to_list/2
update_with/3
update_with/4
values/1
values/2
with/2
with/3
without/2
without/3
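As a usage sketch of the link-aware accessors, `get/4` takes a default value and an `Opts` map that governs how links are loaded as they are encountered. The key name below is illustrative.

```erlang
%% Hypothetical lookup: read <<"balance">> from a message, resolving any
%% link behind the key, and fall back to 0 when the key is absent.
balance_of(Msg, Opts) ->
    hb_maps:get(<<"balance">>, Msg, 0, Opts).
```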
## Function Details ## ### filter/2 ###

filter(Fun::fun((Key::term(), Value::term()) -> boolean()), Map::map()) -> map()

### filter/3 ###

filter(Fun::fun((Key::term(), Value::term()) -> boolean()), Map::map(), Opts::map()) -> map()

### filter_passively_loads_test/0 * ### `filter_passively_loads_test() -> any()` ### filter_with_link_test/0 * ### `filter_with_link_test() -> any()` ### filtermap/2 ###

filtermap(Fun::fun((Key::term(), Value::term()) -> {boolean(), term()}), Map::map()) -> map()

### filtermap/3 ###

filtermap(Fun::fun((Key::term(), Value::term()) -> {boolean(), term()}), Map::map(), Opts::map()) -> map()

### filtermap_passively_loads_test/0 * ### `filtermap_passively_loads_test() -> any()` ### filtermap_with_link_test/0 * ### `filtermap_with_link_test() -> any()` ### find/2 ###

find(Key::term(), Map::map()) -> {ok, term()} | error

### find/3 ###

find(Key::term(), Map::map(), Opts::map()) -> {ok, term()} | error

### fold/3 ###

fold(Fun::fun((Key::term(), Value::term(), Acc::term()) -> term()), Acc::term(), Map::map()) -> term()

### fold/4 ###

fold(Fun::fun((Key::term(), Value::term(), Acc::term()) -> term()), Acc::term(), Map::map(), Opts::map()) -> term()

### fold_with_typed_link_test/0 * ### `fold_with_typed_link_test() -> any()` ### from_list/1 ###

from_list(List::[{Key::term(), Value::term()}]) -> map()

### get/2 ###

get(Key::term(), Map::map()) -> term()

### get/3 ###

get(Key::term(), Map::map(), Default::term()) -> term()

### get/4 ###

get(Key::term(), Map::map(), Default::term(), Opts::map()) -> term()

Get a value from a map, resolving links as they are encountered in both the TABM encoded link format, as well as the structured type. ### get_with_link_test/0 * ### `get_with_link_test() -> any()` ### get_with_typed_link_test/0 * ### `get_with_typed_link_test() -> any()` ### is_key/2 ###

is_key(Key::term(), Map::map()) -> boolean()

### is_key/3 ###

is_key(Key::term(), Map::map(), Opts::map()) -> boolean()

### keys/1 ###

keys(Map::map()) -> [term()]

### keys/2 ###

keys(Map::map(), Opts::map()) -> [term()]

### map/2 ###

map(Fun::fun((Key::term(), Value::term()) -> term()), Map::map()) -> map()

### map/3 ###

map(Fun::fun((Key::term(), Value::term()) -> term()), Map::map(), Opts::map()) -> map()

### map_with_link_test/0 * ### `map_with_link_test() -> any()` ### merge/2 ###

merge(Map1::map(), Map2::map()) -> map()

### merge/3 ###

merge(Map1::map(), Map2::map(), Opts::map()) -> map()

### put/3 ###

put(Key::term(), Value::term(), Map::map()) -> map()

### put/4 ###

put(Key::term(), Value::term(), Map::map(), Opts::map()) -> map()

### remove/2 ###

remove(Key::term(), Map::map()) -> map()

### remove/3 ###

remove(Key::term(), Map::map(), Opts::map()) -> map()

### resolve_on_link_test/0 * ### `resolve_on_link_test() -> any()` ### size/1 ###

size(Map::map()) -> non_neg_integer()

### size/2 ###

size(Map::map(), Opts::map()) -> non_neg_integer()

### take/2 ###

take(N::non_neg_integer(), Map::map()) -> map()

### take/3 ###

take(N::non_neg_integer(), Map::map(), Opts::map()) -> map()

### to_list/1 ###

to_list(Map::map()) -> [{Key::term(), Value::term()}]

### to_list/2 ###

to_list(Map::map(), Opts::map()) -> [{Key::term(), Value::term()}]

### update_with/3 ###

update_with(Key::term(), Fun::fun((Value::term()) -> term()), Map::map()) -> map()

### update_with/4 ###

update_with(Key::term(), Fun::fun((Value::term()) -> term()), Map::map(), Opts::map()) -> map()

### values/1 ###

values(Map::map()) -> [term()]

### values/2 ###

values(Map::map(), Opts::map()) -> [term()]

### with/2 ###

with(Keys::[term()], Map::map()) -> map()

### with/3 ###

with(Keys::[term()], Map::map(), Opts::map()) -> map()

### without/2 ###

without(Keys::[term()], Map::map()) -> map()

### without/3 ###

without(Keys::[term()], Map::map(), Opts::map()) -> map()

--- END OF FILE: docs/resources/source-code/hb_maps.md --- --- START OF FILE: docs/resources/source-code/hb_message_test_vectors.md --- # [Module hb_message_test_vectors.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_message_test_vectors.erl) A battery of test vectors for message codecs, implementing the `message@1.0` encoding and commitment APIs. ## Description ## Additionally, this module houses tests that ensure the general functioning of the `hb_message` API. ## Function Index ##
basic_message_codec_test/2*
binary_to_binary_test/2*
bundled_and_unbundled_ids_differ_test/2*
bundled_ordering_test/2*Ensure that a httpsig@1.0 message which is bundled and requests an invalid ordering of keys is normalized to a valid ordering.
codec_roundtrip_conversion_is_idempotent_test/2*Ensure that converting a message to a codec, then back to TABM multiple times results in the same message being returned.
codec_test_suite/1*
committed_empty_keys_test/2*
committed_keys_test/2*
complex_signed_message_test/2*
deep_multisignature_test/0*
deep_typed_message_id_test/2*
deeply_nested_committed_keys_test/0*
deeply_nested_message_with_content_test/2*Test that we can convert a 3 layer nested message into a tx record and back.
deeply_nested_message_with_only_content/2*
default_keys_removed_test/0*Test that the filter_default_keys/1 function removes TX fields that have the default values found in the tx record, but not those that have been set by the user.
empty_body_test/2*
empty_string_in_nested_tag_test/2*
encode_balance_table/3*
encode_large_balance_table_test/2*
encode_small_balance_table_test/2*
find_multiple_commitments_test_disabled/0*
hashpath_sign_verify_test/2*
id_of_deep_message_and_link_message_match_test/2*
id_of_linked_message_test/2*
is_idempotent/3*Tests a message transforming function to ensure that it is idempotent.
large_body_committed_keys_test/2*
match_modes_test/0*
match_test/2*Test that the message matching function works.
message_with_large_keys_test/2*Test that the data field is correctly managed when we have multiple uses for it (the 'data' key itself, as well as keys that cannot fit in tags).
message_with_simple_embedded_list_test/2*
minimization_test/0*
nested_body_list_test/2*
nested_empty_map_test/2*
nested_message_with_large_content_test/2*Test that the data field is correctly managed when we have multiple uses for it (the 'data' key itself, as well as keys that cannot fit in tags).
nested_message_with_large_keys_and_content_test/2*Check that large keys and data fields are correctly handled together.
nested_message_with_large_keys_test/2*
nested_structured_fields_test/2*
priv_survives_conversion_test/2*
recursive_nested_list_test/2*
run_test/0*Test invocation function, making it easier to run a specific test.
set_body_codec_test/2*
sign_deep_message_from_lazy_cache_read_test/2*
sign_links_test/2*
sign_node_message_test/2*
signed_deep_message_test/2*
signed_list_test/2*
signed_message_encode_decode_verify_test/2*
signed_message_with_derived_components_test/2*
signed_nested_data_key_test/2*
signed_nested_message_with_child_test/2*
signed_non_bundle_is_bundlable_test/2*
signed_only_committed_data_field_test/2*
signed_with_inner_signed_message_test/2*
simple_nested_message_test/2*
simple_signed_nested_message_test/2*
single_layer_message_to_encoding_test/2*Test that we can convert a message into a tx record and back.
specific_order_deeply_nested_signed_message_test/2*
specific_order_signed_message_test/2*
structured_field_atom_parsing_test/2*Structured field parsing tests.
structured_field_decimal_parsing_test/2*
suite_name/1*Create a name for a suite from a codec spec.
suite_test_/0*Organizes a test battery for the hb_message module and its codecs.
suite_test_opts/0*Return a set of options for testing, taking the codec name as an argument.
suite_test_opts/1*
tabm_conversion_is_idempotent_test/2*Ensure that converting a message to/from TABM multiple times repeatedly does not alter the message's contents.
test_codecs/0*Return a list of codecs to test.
test_opts/1*
test_suite/0*
unsigned_id_test/2*
verify_nested_complex_signed_test/2*Check that a nested signed message with an embedded typed list can be further nested and signed.
## Function Details ## ### basic_message_codec_test/2 * ### `basic_message_codec_test(Codec, Opts) -> any()` ### binary_to_binary_test/2 * ### `binary_to_binary_test(Codec, Opts) -> any()` ### bundled_and_unbundled_ids_differ_test/2 * ### `bundled_and_unbundled_ids_differ_test(Codec, Opts) -> any()` ### bundled_ordering_test/2 * ### `bundled_ordering_test(Codec, Opts) -> any()` Ensure that a httpsig@1.0 message which is bundled and requests an invalid ordering of keys is normalized to a valid ordering. ### codec_roundtrip_conversion_is_idempotent_test/2 * ### `codec_roundtrip_conversion_is_idempotent_test(Codec, Opts) -> any()` Ensure that converting a message to a codec, then back to TABM multiple times results in the same message being returned. This test differs from its TABM form, as it shuttles (`to-from-to-...`), while the TABM test repeatedly encodes in a single direction (`to->to->...`). ### codec_test_suite/1 * ### `codec_test_suite(Codecs) -> any()` ### committed_empty_keys_test/2 * ### `committed_empty_keys_test(Codec, Opts) -> any()` ### committed_keys_test/2 * ### `committed_keys_test(Codec, Opts) -> any()` ### complex_signed_message_test/2 * ### `complex_signed_message_test(Codec, Opts) -> any()` ### deep_multisignature_test/0 * ### `deep_multisignature_test() -> any()` ### deep_typed_message_id_test/2 * ### `deep_typed_message_id_test(Codec, Opts) -> any()` ### deeply_nested_committed_keys_test/0 * ### `deeply_nested_committed_keys_test() -> any()` ### deeply_nested_message_with_content_test/2 * ### `deeply_nested_message_with_content_test(Codec, Opts) -> any()` Test that we can convert a 3 layer nested message into a tx record and back. ### deeply_nested_message_with_only_content/2 * ### `deeply_nested_message_with_only_content(Codec, Opts) -> any()` ### default_keys_removed_test/0 * ### `default_keys_removed_test() -> any()` Test that the filter_default_keys/1 function removes TX fields that have the default values found in the tx record, but not those that have been set by the user. ### empty_body_test/2 * ### `empty_body_test(Codec, Opts) -> any()` ### empty_string_in_nested_tag_test/2 * ### `empty_string_in_nested_tag_test(Codec, Opts) -> any()` ### encode_balance_table/3 * ### `encode_balance_table(Size, Codec, Opts) -> any()` ### encode_large_balance_table_test/2 * ### `encode_large_balance_table_test(Codec, Opts) -> any()` ### encode_small_balance_table_test/2 * ### `encode_small_balance_table_test(Codec, Opts) -> any()` ### find_multiple_commitments_test_disabled/0 * ### `find_multiple_commitments_test_disabled() -> any()` ### hashpath_sign_verify_test/2 * ### `hashpath_sign_verify_test(Codec, Opts) -> any()` ### id_of_deep_message_and_link_message_match_test/2 * ### `id_of_deep_message_and_link_message_match_test(Codec, Opts) -> any()` ### id_of_linked_message_test/2 * ### `id_of_linked_message_test(Codec, Opts) -> any()` ### is_idempotent/3 * ### `is_idempotent(Func, Msg, Opts) -> any()` Tests a message transforming function to ensure that it is idempotent. Runs the conversion a total of 3 times, ensuring that the result remains unchanged. This function takes transformation functions that result in `{ok, Res}`-form messages, as well as bare message results. ### large_body_committed_keys_test/2 * ### `large_body_committed_keys_test(Codec, Opts) -> any()` ### match_modes_test/0 * ### `match_modes_test() -> any()` ### match_test/2 * ### `match_test(Codec, Opts) -> any()` Test that the message matching function works. 
### message_with_large_keys_test/2 * ### `message_with_large_keys_test(Codec, Opts) -> any()` Test that the data field is correctly managed when we have multiple uses for it (the 'data' key itself, as well as keys that cannot fit in tags). ### message_with_simple_embedded_list_test/2 * ### `message_with_simple_embedded_list_test(Codec, Opts) -> any()` ### minimization_test/0 * ### `minimization_test() -> any()` ### nested_body_list_test/2 * ### `nested_body_list_test(Codec, Opts) -> any()` ### nested_empty_map_test/2 * ### `nested_empty_map_test(Codec, Opts) -> any()` ### nested_message_with_large_content_test/2 * ### `nested_message_with_large_content_test(Codec, Opts) -> any()` Test that the data field is correctly managed when we have multiple uses for it (the 'data' key itself, as well as keys that cannot fit in tags). ### nested_message_with_large_keys_and_content_test/2 * ### `nested_message_with_large_keys_and_content_test(Codec, Opts) -> any()` Check that large keys and data fields are correctly handled together. ### nested_message_with_large_keys_test/2 * ### `nested_message_with_large_keys_test(Codec, Opts) -> any()` ### nested_structured_fields_test/2 * ### `nested_structured_fields_test(Codec, Opts) -> any()` ### priv_survives_conversion_test/2 * ### `priv_survives_conversion_test(Codec, Opts) -> any()` ### recursive_nested_list_test/2 * ### `recursive_nested_list_test(Codec, Opts) -> any()` ### run_test/0 * ### `run_test() -> any()` Test invocation function, making it easier to run a specific test. Disable/enable as needed. ### set_body_codec_test/2 * ### `set_body_codec_test(Codec, Opts) -> any()` ### sign_deep_message_from_lazy_cache_read_test/2 * ### `sign_deep_message_from_lazy_cache_read_test(Codec, Opts) -> any()` ### sign_links_test/2 * ### `sign_links_test(Codec, Opts) -> any()` ### sign_node_message_test/2 * ### `sign_node_message_test(Codec, Opts) -> any()` ### signed_deep_message_test/2 * ### `signed_deep_message_test(Codec, Opts) -> any()` ### signed_list_test/2 * ### `signed_list_test(Codec, Opts) -> any()` ### signed_message_encode_decode_verify_test/2 * ### `signed_message_encode_decode_verify_test(Codec, Opts) -> any()` ### signed_message_with_derived_components_test/2 * ### `signed_message_with_derived_components_test(Codec, Opts) -> any()` ### signed_nested_data_key_test/2 * ### `signed_nested_data_key_test(Codec, Opts) -> any()` ### signed_nested_message_with_child_test/2 * ### `signed_nested_message_with_child_test(Codec, Opts) -> any()` ### signed_non_bundle_is_bundlable_test/2 * ### `signed_non_bundle_is_bundlable_test(Codec, Opts) -> any()` ### signed_only_committed_data_field_test/2 * ### `signed_only_committed_data_field_test(Codec, Opts) -> any()` ### signed_with_inner_signed_message_test/2 * ### `signed_with_inner_signed_message_test(Codec, Opts) -> any()` ### simple_nested_message_test/2 * ### `simple_nested_message_test(Codec, Opts) -> any()` ### simple_signed_nested_message_test/2 * ### `simple_signed_nested_message_test(Codec, Opts) -> any()` ### single_layer_message_to_encoding_test/2 * ### `single_layer_message_to_encoding_test(Codec, Opts) -> any()` Test that we can convert a message into a tx record and back. 
### specific_order_deeply_nested_signed_message_test/2 * ### `specific_order_deeply_nested_signed_message_test(RawCodec, Opts) -> any()` ### specific_order_signed_message_test/2 * ### `specific_order_signed_message_test(RawCodec, Opts) -> any()` ### structured_field_atom_parsing_test/2 * ### `structured_field_atom_parsing_test(Codec, Opts) -> any()` Structured field parsing tests. ### structured_field_decimal_parsing_test/2 * ### `structured_field_decimal_parsing_test(Codec, Opts) -> any()` ### suite_name/1 * ### `suite_name(CodecSpec) -> any()` Create a name for a suite from a codec spec. ### suite_test_/0 * ### `suite_test_() -> any()` Organizes a test battery for the `hb_message` module and its codecs. ### suite_test_opts/0 * ### `suite_test_opts() -> any()` Return a set of options for testing, taking the codec name as an argument. We do not presently use the codec name in the test, but we may wish to do so in the future. ### suite_test_opts/1 * ### `suite_test_opts(OptsName) -> any()` ### tabm_conversion_is_idempotent_test/2 * ### `tabm_conversion_is_idempotent_test(Codec, Opts) -> any()` Ensure that converting a message to/from TABM multiple times repeatedly does not alter the message's contents. ### test_codecs/0 * ### `test_codecs() -> any()` Return a list of codecs to test. Disable these as necessary if you need to test the functionality of a single codec, etc. ### test_opts/1 * ### `test_opts(X1) -> any()` ### test_suite/0 * ### `test_suite() -> any()` ### unsigned_id_test/2 * ### `unsigned_id_test(Codec, Opts) -> any()` ### verify_nested_complex_signed_test/2 * ### `verify_nested_complex_signed_test(Codec, Opts) -> any()` Check that a nested signed message with an embedded typed list can be further nested and signed. We then encode and decode the message. This tests a large portion of the complex type encodings that HyperBEAM uses together. --- END OF FILE: docs/resources/source-code/hb_message_test_vectors.md --- --- START OF FILE: docs/resources/source-code/hb_message.md --- # [Module hb_message.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_message.erl) This module acts as an adapter between messages, as modeled in the AO-Core protocol, and their underlying binary representations and formats. ## Description ## Unless you are implementing a new message serialization codec, you should not need to interact with this module directly. Instead, use the `hb_ao` interfaces to interact with all messages. The `dev_message` module implements a device interface for abstracting over the different message formats. `hb_message` and the HyperBEAM caches can interact with multiple different types of message formats: - Richly typed AO-Core structured messages. - Arweave transactions. - ANS-104 data items. - HTTP Signed Messages. - Flat Maps. This module is responsible for converting between these formats. It does so by normalizing messages to a common format: `Type Annotated Binary Messages` (TABM). TABMs are deep Erlang maps with keys that only contain either other TABMs or binary values. By marshalling all messages into this format, they can easily be coerced into other output formats. For example, generating a `HTTP Signed Message` format output from an Arweave transaction. TABM is also a simple format from a computational perspective (only binary literals and O(1) access maps), such that operations upon them are efficient. The structure of the conversions is as follows:
```
Arweave TX/ANS-104 ==> dev_codec_ans104:from/1 ==> TABM
HTTP Signed Message ==> dev_codec_httpsig_conv:from/1 ==> TABM
Flat Maps ==> dev_codec_flat:from/1 ==> TABM

TABM ==> dev_codec_structured:to/1 ==> AO-Core Message
AO-Core Message ==> dev_codec_structured:from/1 ==> TABM

TABM ==> dev_codec_ans104:to/1 ==> Arweave TX/ANS-104
TABM ==> dev_codec_httpsig_conv:to/1 ==> HTTP Signed Message
TABM ==> dev_codec_flat:to/1 ==> Flat Maps
...
```
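To make the conversion flow concrete, here is a minimal sketch (not taken from the source) of round-tripping a message with `convert/3` and `convert/4`. The codec name `ans104@1.0`, the example keys, and the empty options map are assumptions for illustration; the authoritative usage is found in `hb_message_test_vectors.erl`.

```erlang
%% Hedged sketch (Erlang shell): round-trip a message through an alternate
%% codec. The codec name <<"ans104@1.0">> and the empty Opts map (#{}) are
%% illustrative assumptions.
Msg = #{ <<"key">> => <<"value">>, <<"data">> => <<"payload">> }.
%% structured@1.0 is assumed as the source format when none is given:
Encoded = hb_message:convert(Msg, <<"ans104@1.0">>, #{}).
%% Convert back, naming the source format explicitly this time:
Decoded = hb_message:convert(Encoded, <<"structured@1.0">>, <<"ans104@1.0">>, #{}).
%% Expected: true for a lossless round-trip (see match/2 below):
hb_message:match(Msg, Decoded).
```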
Additionally, this module provides a number of utility functions for manipulating messages. For example, `hb_message:commit/2` to sign a message of arbitrary type, or `hb_message:format/1` to print an AO-Core/TABM message in a human-readable format. The `hb_cache` module is responsible for storing and retrieving messages in the HyperBEAM stores configured on the node. Each store has its own storage backend, but each works with simple key-value pairs. Subsequently, the `hb_cache` module uses TABMs as the internal format for storing and retrieving messages. Test vectors to ensure the functioning of this module and the codecs that interact with it are found in `hb_message_test_vectors.erl`. ## Function Index ##
commit/2Sign a message with the given wallet.
commit/3
commitment/2Extract a commitment from a message given a committer ID, or a spec message to match against.
commitment/3
commitment_devices/2Return the devices for which there are commitments on a message.
committed/3Return the list of committed keys from a message.
conversion_spec_to_req/2*Get a codec device and request params from the given conversion request.
convert/3Convert a message from one format to another.
convert/4
default_tx_list/0Get the ordered list of fields as AO-Core keys and default values of the tx record.
default_tx_message/0*Get the normalized fields and default values of the tx record.
filter_default_keys/1Remove keys from a map that have the default values found in the tx record.
find_target/3Implements a standard pattern in which the target for an operation is found by looking for a target key in the request.
format/1Format a message for printing, optionally taking an indentation level to start from.
format/2
from_tabm/4*
id/1Return the ID of a message.
id/2
id/3
is_signed_key/3Determine whether a specific key is part of a message's commitments.
match/2Check if two maps match, including recursively checking nested maps.
match/3
match/4
matchable_keys/1*
minimize/1Remove keys from the map that can be regenerated.
minimize/2*
normalize/2*Return a map with only the keys that are necessary, without those that can be regenerated.
print/1Pretty-print a message.
print/2*
restore_priv/3*Add the existing priv sub-map back to a converted message, honoring any existing priv sub-map that may already be present.
signers/2Return all of the committers on a message that have 'normal', 256 bit, addresses.
to_tabm/3*
type/1Return the type of an encoded message.
uncommitted/1Return the unsigned version of a message in AO-Core format.
uncommitted/2
unsafe_match/5*
verify/1Wrapper function to verify a message.
verify/2
verify/3
with_commitments/3Filter messages that do not match the 'spec' given.
with_only_committed/2Return a message with only the committed keys.
with_only_committers/2Return the message with only the specified committers attached.
with_only_committers/3
without_commitments/3Filter messages that match the 'spec' given.
## Function Details ## ### commit/2 ### `commit(Msg, WalletOrOpts) -> any()` Sign a message with the given wallet. ### commit/3 ### `commit(Msg, Wallet, Format) -> any()` ### commitment/2 ### `commitment(Committer, Msg) -> any()` Extract a commitment from a message given a `committer` ID, or a spec message to match against. Returns only the first matching commitment, or `not_found`. ### commitment/3 ### `commitment(CommitterID, Msg, Opts) -> any()` ### commitment_devices/2 ### `commitment_devices(Msg, Opts) -> any()` Return the devices for which there are commitments on a message. ### committed/3 ### `committed(Msg, List, Opts) -> any()` Return the list of committed keys from a message. ### conversion_spec_to_req/2 * ### `conversion_spec_to_req(Spec, Opts) -> any()` Get a codec device and request params from the given conversion request. Expects the conversion spec to be either a binary codec name, or a map with a `device` key and other parameters. Additionally honors the `always_bundle` key in the node message if present. ### convert/3 ### `convert(Msg, TargetFormat, Opts) -> any()` Convert a message from one format to another. Takes a message in the source format, a target format, and a set of opts. If a source format is not given, it is assumed to be `structured@1.0`. Additional codecs can be added by ensuring they are part of the `Opts` map -- either globally, or locally for a computation. The encoding happens in two phases: 1. Convert the message to a TABM. 2. Convert the TABM to the target format. The conversion to a TABM is done by the `structured@1.0` codec, which is always available. The conversion from a TABM is done by the target codec. ### convert/4 ### `convert(Msg, TargetFormat, SourceFormat, Opts) -> any()` ### default_tx_list/0 ### `default_tx_list() -> any()` Get the ordered list of fields as AO-Core keys and default values of the tx record. ### default_tx_message/0 * ### `default_tx_message() -> any()` Get the normalized fields and default values of the tx record. ### filter_default_keys/1 ### `filter_default_keys(Map) -> any()` Remove keys from a map that have the default values found in the tx record. ### find_target/3 ### `find_target(Self, Req, Opts) -> any()` Implements a standard pattern in which the target for an operation is found by looking for a `target` key in the request. If the target is `self`, or not present, the operation is performed on the original message. Otherwise, the target is expected to be a key in the message, and the operation is performed on the value of that key. ### format/1 ### `format(Item) -> any()` Format a message for printing, optionally taking an indentation level to start from. ### format/2 ### `format(Bin, Indent) -> any()` ### from_tabm/4 * ### `from_tabm(Msg, TargetFormat, OldPriv, Opts) -> any()` ### id/1 ### `id(Msg) -> any()` Return the ID of a message. ### id/2 ### `id(Msg, Opts) -> any()` ### id/3 ### `id(Msg, RawCommitters, Opts) -> any()` ### is_signed_key/3 ### `is_signed_key(Key, Msg, Opts) -> any()` Determine whether a specific key is part of a message's commitments. ### match/2 ### `match(Map1, Map2) -> any()` Check if two maps match, including recursively checking nested maps. Takes an optional mode argument to control the matching behavior: `strict`: All keys in both maps must be present and match. `only_present`: Only present keys in both maps must match. `primary`: Only the primary map's keys must be present. Returns `true` or `{ErrType, Err}`. 
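A minimal sketch of the three matching modes described above, using hypothetical maps. The mode atoms are those named in the description; the expected outcomes in the comments are inferences, not quotes from the source.

```erlang
%% Hedged sketch (Erlang shell) of the matching modes; example maps are
%% hypothetical and expected results are shown only as comments.
Base = #{ <<"a">> => 1, <<"nested">> => #{ <<"b">> => 2 } }.
Partial = #{ <<"a">> => 1 }.
%% strict: every key in both maps must be present and match, so this is
%% expected to report a mismatch rather than true:
hb_message:match(Base, Partial, strict).
%% only_present: only keys present in both maps are compared -> true expected:
hb_message:match(Base, Partial, only_present).
%% primary: only the primary (first) map's keys must be present -> true expected:
hb_message:match(Partial, Base, primary).
```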
### match/3 ### `match(Map1, Map2, Mode) -> any()` ### match/4 ### `match(Map1, Map2, Mode, Opts) -> any()` ### matchable_keys/1 * ### `matchable_keys(Map) -> any()` ### minimize/1 ### `minimize(Msg) -> any()` Remove keys from the map that can be regenerated. Optionally takes an additional list of keys to include in the minimization. ### minimize/2 * ### `minimize(RawVal, ExtraKeys) -> any()` ### normalize/2 * ### `normalize(Map, Opts) -> any()` Return a map with only the keys that are necessary, without those that can be regenerated. ### print/1 ### `print(Msg) -> any()` Pretty-print a message. ### print/2 * ### `print(Msg, Indent) -> any()` ### restore_priv/3 * ### `restore_priv(Msg, EmptyPriv, Opts) -> any()` Add the existing `priv` sub-map back to a converted message, honoring any existing `priv` sub-map that may already be present. ### signers/2 ### `signers(Msg, Opts) -> any()` Return all of the committers on a message that have 'normal', 256 bit, addresses. ### to_tabm/3 * ### `to_tabm(Msg, SourceFormat, Opts) -> any()` ### type/1 ### `type(TX) -> any()` Return the type of an encoded message. ### uncommitted/1 ### `uncommitted(Msg) -> any()` Return the unsigned version of a message in AO-Core format. ### uncommitted/2 ### `uncommitted(Bin, Opts) -> any()` ### unsafe_match/5 * ### `unsafe_match(Map1, Map2, Mode, Path, Opts) -> any()` ### verify/1 ### `verify(Msg) -> any()` Wrapper function to verify a message. ### verify/2 ### `verify(Msg, Committers) -> any()` ### verify/3 ### `verify(Msg, Committers, Opts) -> any()` ### with_commitments/3 ### `with_commitments(Spec, Msg, Opts) -> any()` Filter messages that do not match the 'spec' given. The underlying match is performed in the `only_present` mode, such that match specifications only need to specify the keys that must be present. ### with_only_committed/2 ### `with_only_committed(Msg, Opts) -> any()` Return a message with only the committed keys. If no commitments are present, the message is returned unchanged. This means that you need to check if the message is: - Committed - Verifies ...before using the output of this function as the 'canonical' message. This is such that expensive operations like signature verification are not performed unless necessary. ### with_only_committers/2 ### `with_only_committers(Msg, Committers) -> any()` Return the message with only the specified committers attached. ### with_only_committers/3 ### `with_only_committers(Msg, Committers, Opts) -> any()` ### without_commitments/3 ### `without_commitments(Spec, Msg, Opts) -> any()` Filter messages that match the 'spec' given. Inverts the `with_commitments/3` function, such that only messages that do _not_ match the spec are returned. --- END OF FILE: docs/resources/source-code/hb_message.md --- --- START OF FILE: docs/resources/source-code/hb_metrics_collector.md --- # [Module hb_metrics_collector.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_metrics_collector.erl) __Behaviours:__ [`prometheus_collector`](prometheus_collector.md). ## Function Index ##
collect_metrics/2
collect_mf/2
create_gauge/3*
deregister_cleanup/1
## Function Details ## ### collect_metrics/2 ### `collect_metrics(X1, SystemLoad) -> any()` ### collect_mf/2 ### `collect_mf(Registry, Callback) -> any()` ### create_gauge/3 * ### `create_gauge(Name, Help, Data) -> any()` ### deregister_cleanup/1 ### `deregister_cleanup(X1) -> any()` --- END OF FILE: docs/resources/source-code/hb_metrics_collector.md --- --- START OF FILE: docs/resources/source-code/hb_name.md --- # [Module hb_name.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_name.erl) An abstraction for name registration/deregistration in HyperBEAM. ## Description ## Its motivation is to provide a way to register names that are not necessarily atoms, but can be any term (for example: hashpaths or `process@1.0` IDs). An important characteristic of these functions is that they are atomic: There can only ever be one registrant for a given name at a time. ## Function Index ##
all/0List the names in the registry.
all_test/0*
atom_test/0*
basic_test/1*
cleanup_test/0*
concurrency_test/0*
dead_process_test/0*
ets_lookup/1*
lookup/1Lookup a name -> PID.
register/1Register a name.
register/2
spawn_test_workers/1*
start/0
start_ets/0*
term_test/0*
unregister/1Unregister a name.
wait_for_cleanup/2*
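A minimal sketch of the registration pattern described in the module description, assuming a hypothetical tuple name. Return values are deliberately not asserted, since the exact atoms are not documented here.

```erlang
%% Hedged sketch (Erlang shell): names may be arbitrary terms, and only one
%% registrant can hold a given name at a time. The tuple name is hypothetical.
hb_name:start().
hb_name:register({worker, <<"proc-id">>}).      %% register the calling process
Pid = hb_name:lookup({worker, <<"proc-id">>}).  %% -> the registrant's PID
hb_name:unregister({worker, <<"proc-id">>}).    %% release the name again
```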
## Function Details ## ### all/0 ### `all() -> any()` List the names in the registry. ### all_test/0 * ### `all_test() -> any()` ### atom_test/0 * ### `atom_test() -> any()` ### basic_test/1 * ### `basic_test(Term) -> any()` ### cleanup_test/0 * ### `cleanup_test() -> any()` ### concurrency_test/0 * ### `concurrency_test() -> any()` ### dead_process_test/0 * ### `dead_process_test() -> any()` ### ets_lookup/1 * ### `ets_lookup(Name) -> any()` ### lookup/1 ### `lookup(Name) -> any()` Lookup a name -> PID. ### register/1 ### `register(Name) -> any()` Register a name. If the name is already registered, the registration will fail. The name can be any Erlang term. ### register/2 ### `register(Name, Pid) -> any()` ### spawn_test_workers/1 * ### `spawn_test_workers(Name) -> any()` ### start/0 ### `start() -> any()` ### start_ets/0 * ### `start_ets() -> any()` ### term_test/0 * ### `term_test() -> any()` ### unregister/1 ### `unregister(Name) -> any()` Unregister a name. ### wait_for_cleanup/2 * ### `wait_for_cleanup(Name, Retries) -> any()` --- END OF FILE: docs/resources/source-code/hb_name.md --- --- START OF FILE: docs/resources/source-code/hb_opts.md --- # [Module hb_opts.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_opts.erl) A module for interacting with local and global options inside HyperBEAM. ## Description ## Options are set globally, but can also be overridden using an optional local `Opts` map argument. Many functions across the HyperBEAM environment accept an `Opts` argument, which can be used to customize behavior. Options set in an `Opts` map must _never_ change the behavior of a function that should otherwise be deterministic. Doing so may lead to loss of funds by the HyperBEAM node operator, as the results of their executions will be different from those of other node operators. If they are economically staked on the correctness of these results, they may experience punishments for non-verifiable behavior. Instead, if a local node setting makes deterministic behavior impossible, the caller should fail the execution with a refusal to execute. ## Function Index ##
as/2Find a given identity from the identities map, and return the options merged with the sub-options for that identity.
cached_os_env/2*Cache the result of os:getenv/1 in the process dictionary, as it never changes during the lifetime of a node.
check_required_opts/2Utility function to check for required options in a list.
config_lookup/3*An abstraction for looking up configuration variables.
default_message/0The default configuration options of the hyperbeam node.
ensure_node_history/2Ensures all items in a node history meet required configuration options.
get/1Get an option from the global options, optionally overriding with a local Opts map if prefer or only is set to local.
get/2
get/3
global_get/3*Get an environment variable or configuration key.
identities/1Find all known IDs and their sub-options from the priv_ids map.
identities/2*
load/1Parse a flat@1.0 encoded file into a map, matching the types of the keys to those in the default message.
load/2
load_bin/2
mimic_default_types/3Mimic the types of the default message for a given map.
normalize_default/1*Get an option from environment variables, optionally consulting the hb_features of the node if a conditional default tuple is provided.
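A minimal sketch of reading an option with and without a local override, assuming a hypothetical `port` value. It relies only on the documented behavior that `prefer` defaults to `local`; the expected results appear as comments, not assertions.

```erlang
%% Hedged sketch (Erlang shell): the port values shown are hypothetical.
Opts = #{ port => 10001 }.
%% With no local override, the global/default value is returned:
hb_opts:get(port, 8734, #{}).    %% -> the node's configured port, or 8734
%% With a local override present (and `prefer` defaulting to `local`):
hb_opts:get(port, 8734, Opts).   %% -> 10001 expected
```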
## Function Details ## ### as/2 ### `as(Identity, Opts) -> any()` Find a given identity from the `identities` map, and return the options merged with the sub-options for that identity. ### cached_os_env/2 * ### `cached_os_env(Key, DefaultValue) -> any()` Cache the result of os:getenv/1 in the process dictionary, as it never changes during the lifetime of a node. ### check_required_opts/2 ###

check_required_opts(KeyValuePairs::[{binary(), term()}], Opts::map()) -> {ok, map()} | {error, binary()}

`KeyValuePairs`: A list of {Name, Value} pairs to check.
`Opts`: The original options map to return if validation succeeds.
returns: `{ok, Opts}` if all required options are present, or `{error, <<"Missing required parameters: ", MissingOptsStr/binary>>}` where `MissingOptsStr` is a comma-separated list of missing option names. Utility function to check for required options in a list. Takes a list of {Name, Value} pairs and returns: - {ok, Opts} when all required options are present (Value =/= not_found) - {error, ErrorMsg} with a message listing all missing options when any are not_found ### config_lookup/3 * ### `config_lookup(Key, Default, Opts) -> any()` An abstraction for looking up configuration variables. In the future, this is the function that we will want to change to support a more dynamic configuration system. ### default_message/0 ### `default_message() -> any()` The default configuration options of the hyperbeam node. ### ensure_node_history/2 ###

ensure_node_history(NodeHistory::list() | term(), RequiredOpts::map()) -> {ok, binary()} | {error, binary()}

`RequiredOpts`: A map of options that must be present and unchanging
returns: `{ok, <<"valid">>}` when validation passes `{error, <<"missing_keys">>}` when required keys are missing from first item `{error, <<"invalid_values">>}` when first item values don't match requirements `{error, <<"modified_required_key">>}` when history items modify required keys `{error, <<"validation_failed">>}` when other validation errors occur Ensures all items in a node history meet required configuration options. This function verifies that the first item (complete opts) contains all required configuration options and that their values match the expected format. Then it validates that subsequent history items (which represent differences) never modify any of the required keys from the first item. Validation is performed in two steps: 1. Checks that the first item has all required keys and valid values 2. Verifies that subsequent items don't modify any required keys from the first item ### get/1 ### `get(Key) -> any()` Get an option from the global options, optionally overriding with a local `Opts` map if `prefer` or `only` is set to `local`. If the `only` option is provided in the `local` map, only keys found in the corresponding (`local` or `global`) map will be returned. This function also offers users a way to specify a default value to return if the option is not set. `prefer` defaults to `local`. ### get/2 ### `get(Key, Default) -> any()` ### get/3 ### `get(Key, Default, Opts) -> any()` ### global_get/3 * ### `global_get(Key, Default, Opts) -> any()` Get an environment variable or configuration key. ### identities/1 ### `identities(Opts) -> any()` Find all known IDs and their sub-options from the `priv_ids` map. Allows the identities to be named, or based on addresses. The results are normalized such that the map returned by this function contains both mechanisms for finding an identity and its sub-options. Additionally, sub-options are also normalized such that the `address` property is present and accurate for all given identities. ### identities/2 * ### `identities(Default, Opts) -> any()` ### load/1 ### `load(Path) -> any()` Parse a `flat@1.0` encoded file into a map, matching the types of the keys to those in the default message. ### load/2 ### `load(Path, Opts) -> any()` ### load_bin/2 ### `load_bin(Bin, Opts) -> any()` ### mimic_default_types/3 ### `mimic_default_types(Map, Mode, Opts) -> any()` Mimic the types of the default message for a given map. ### normalize_default/1 * ### `normalize_default(Default) -> any()` Get an option from environment variables, optionally consulting the `hb_features` of the node if a conditional default tuple is provided. --- END OF FILE: docs/resources/source-code/hb_opts.md --- --- START OF FILE: docs/resources/source-code/hb_path.md --- # [Module hb_path.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_path.erl) This module provides utilities for manipulating the paths of a message: Its request path (referred to in messages as just the `Path`), and its HashPath. ## Description ## A HashPath is a rolling Merkle list of the messages that have been applied in order to generate a given message. Because applied messages can themselves be the result of message applications with the AO-Core protocol, the HashPath can be thought of as the tree of messages that represent the history of a given message. The initial message on a HashPath is referred to by its ID and serves as its user-generated 'root'. Specifically, the HashPath can be generated by hashing the previous HashPath and the current message. 
This means that each message in the HashPath is dependent on all previous messages. ``` Msg1.HashPath = Msg1.ID Msg3.HashPath = Msg1.Hash(Msg1.HashPath, Msg2.ID) Msg3.{...} = AO-Core.apply(Msg1, Msg2) ... ``` A message's ID itself includes its HashPath, leading to the mixing of a Msg2's merkle list into the resulting Msg3's HashPath. This allows a single message to represent a history _tree_ of all of the messages that were applied to generate it -- rather than just a linear history. A message may also specify its own algorithm for generating its HashPath, which allows for custom logic to be used for representing the history of a message. When Msg2's are applied to a Msg1, the resulting Msg3's HashPath will be generated according to Msg1's algorithm choice. ## Function Index ##
do_to_binary/1*
from_message/3Extract the request path or hashpath from a message.
hashpath/2Add an ID of a Msg2 to the HashPath of another message.
hashpath/3
hashpath/4
hashpath_alg/2Get the hashpath function for a message from its HashPath-Alg.
hashpath_direct_msg2_test/0*
hashpath_test/0*
hd/2Extract the first key from a Message2's Path field.
hd_test/0*
matches/2Check if two keys match.
multiple_hashpaths_test/0*
normalize/1Normalize a path to a binary, removing the leading slash if present.
pop_from_message_test/0*
pop_from_path_list_test/0*
pop_request/2Pop the next element from a request path or path list.
priv_remaining/2Return the Remaining-Path of a message, from its hidden AO-Core key.
priv_store_remaining/2Store the remaining path of a message in its hidden AO-Core key.
priv_store_remaining/3
push_request/2Add a message to the head (next to execute) of a request path.
push_request/3
queue_request/2Queue a message at the back of a request path.
queue_request/3
regex_matches/2Check if two keys match using regex.
regex_matches_test/0*
term_to_path_parts/1Convert a term into an executable path.
term_to_path_parts/2
term_to_path_parts_test/0*
tl/2Return the message without its first path element.
tl_test/0*
to_binary/1Convert a path of any form to a binary.
to_binary_test/0*
validate_path_transitions/2*
verify_hashpath/2Verify the HashPath of a message, given a list of messages that represent its history.
verify_hashpath_test/0*
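A minimal sketch of the path normalization helpers, with expected results shown as comments. The return shapes are inferred from the descriptions of these functions rather than taken from the source.

```erlang
%% Hedged sketch (Erlang shell): expected return shapes are inferences only.
hb_path:term_to_path_parts(<<"/a/b/c">>).        %% -> [<<"a">>, <<"b">>, <<"c">>]
hb_path:to_binary([<<"a">>, <<"b">>, <<"c">>]).  %% -> <<"a/b/c">>
hb_path:normalize(<<"/a/b/c">>).                 %% -> <<"a/b/c">> (leading slash removed)
```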
## Function Details ## ### do_to_binary/1 * ### `do_to_binary(Path) -> any()` ### from_message/3 ### `from_message(Type, Link, Opts) -> any()` Extract the request path or hashpath from a message. We do not use AO-Core for this resolution because this function is called from inside AO-Core itself. This imparts a requirement: the message's device must store a viable hashpath and path in its Erlang map at all times, unless the message is directly from a user (in which case paths and hashpaths will not have been assigned yet). ### hashpath/2 ### `hashpath(Bin, Opts) -> any()` Add an ID of a Msg2 to the HashPath of another message. ### hashpath/3 ### `hashpath(Msg1, Msg2, Opts) -> any()` ### hashpath/4 ### `hashpath(Msg1, Msg2, HashpathAlg, Opts) -> any()` ### hashpath_alg/2 ### `hashpath_alg(Msg, Opts) -> any()` Get the hashpath function for a message from its HashPath-Alg. If no hashpath algorithm is specified, the protocol defaults to `sha-256-chain`. ### hashpath_direct_msg2_test/0 * ### `hashpath_direct_msg2_test() -> any()` ### hashpath_test/0 * ### `hashpath_test() -> any()` ### hd/2 ### `hd(Msg2, Opts) -> any()` Extract the first key from a `Message2`'s `Path` field. Note: This function uses the `dev_message:get/2` function, rather than a generic call as the path should always be an explicit key in the message. ### hd_test/0 * ### `hd_test() -> any()` ### matches/2 ### `matches(Key1, Key2) -> any()` Check if two keys match. ### multiple_hashpaths_test/0 * ### `multiple_hashpaths_test() -> any()` ### normalize/1 ### `normalize(Path) -> any()` Normalize a path to a binary, removing the leading slash if present. ### pop_from_message_test/0 * ### `pop_from_message_test() -> any()` ### pop_from_path_list_test/0 * ### `pop_from_path_list_test() -> any()` ### pop_request/2 ### `pop_request(Msg, Opts) -> any()` Pop the next element from a request path or path list. ### priv_remaining/2 ### `priv_remaining(Msg, Opts) -> any()` Return the `Remaining-Path` of a message, from its hidden `AO-Core` key. Does not use the `get` or set `hb_private` functions, such that it can be safely used inside the main AO-Core resolve function. ### priv_store_remaining/2 ### `priv_store_remaining(Msg, RemainingPath) -> any()` Store the remaining path of a message in its hidden `AO-Core` key. ### priv_store_remaining/3 ### `priv_store_remaining(Msg, RemainingPath, Opts) -> any()` ### push_request/2 ### `push_request(Msg, Path) -> any()` Add a message to the head (next to execute) of a request path. ### push_request/3 ### `push_request(Msg, Path, Opts) -> any()` ### queue_request/2 ### `queue_request(Msg, Path) -> any()` Queue a message at the back of a request path. `path` is the only key that we cannot use dev_message's `set/3` function for (as it expects the compute path to be there), so we use `hb_maps:put/3` instead. ### queue_request/3 ### `queue_request(Msg, Path, Opts) -> any()` ### regex_matches/2 ### `regex_matches(Path1, Path2) -> any()` Check if two keys match using regex. ### regex_matches_test/0 * ### `regex_matches_test() -> any()` ### term_to_path_parts/1 ### `term_to_path_parts(Path) -> any()` Convert a term into an executable path. Supports binaries, lists, and atoms. Notably, it does not support strings as lists of characters. ### term_to_path_parts/2 ### `term_to_path_parts(Link, Opts) -> any()` ### term_to_path_parts_test/0 * ### `term_to_path_parts_test() -> any()` ### tl/2 ### `tl(Msg2, Opts) -> any()` Return the message without its first path element. 
Note that this is the only transformation in AO-Core that does _not_ make a log of its transformation. Subsequently, the message's IDs will not be verifiable after executing this transformation. This may or may not be the mainnet behavior we want. ### tl_test/0 * ### `tl_test() -> any()` ### to_binary/1 ### `to_binary(Path) -> any()` Convert a path of any form to a binary. ### to_binary_test/0 * ### `to_binary_test() -> any()` ### validate_path_transitions/2 * ### `validate_path_transitions(X, Opts) -> any()` ### verify_hashpath/2 ### `verify_hashpath(Rest, Opts) -> any()` Verify the HashPath of a message, given a list of messages that represent its history. ### verify_hashpath_test/0 * ### `verify_hashpath_test() -> any()` --- END OF FILE: docs/resources/source-code/hb_path.md --- --- START OF FILE: docs/resources/source-code/hb_persistent.md --- # [Module hb_persistent.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_persistent.erl) Creates and manages long-lived AO-Core resolution processes. ## Description ## These can be useful for situations where a message is large and expensive to serialize and deserialize, or when executions should be deliberately serialized to avoid parallel executions of the same computation. This module is called during the core `hb_ao` execution process, so care must be taken to avoid recursive spawns/loops. Built using the `pg` module, which is a distributed Erlang process group manager. ## Function Index ##
await/4If there was already an Erlang process handling this execution, we should register with them and wait for them to notify us of completion.
deduplicated_execution_test/0*Test merging and returning a value with a persistent worker.
default_await/5Default await function that waits for a resolution from a worker.
default_grouper/3Create a group name from a Msg1 and Msg2 pair as a tuple.
default_worker/3A server function for handling persistent executions.
do_monitor/3*
find_execution/2*Find a group with the given name.
find_or_register/3Register the process to lead an execution if none is found, otherwise signal that we should await resolution.
find_or_register/4*
forward_work/2Forward requests to a newly delegated execution process.
group/3Calculate the group name for a Msg1 and Msg2 pair.
notify/4Check our inbox for processes that are waiting for the resolution of this execution.
persistent_worker_test/0*Test spawning a default persistent worker.
register_groupname/2*Register for performing an AO-Core resolution.
send_response/4*Helper function that wraps responding with a new Msg3.
spawn_after_execution_test/0*
spawn_test_client/2*
spawn_test_client/3*
start/0*Ensure that the pg module is started.
start_monitor/0Start a monitor that prints the current members of the group every n seconds.
start_monitor/1
start_monitor/2*
start_worker/2Start a worker process that will hold a message in memory for future executions.
start_worker/3
stop_monitor/1
test_device/0*
test_device/1*
unregister/3*Unregister for being the leader on an AO-Core resolution.
unregister_groupname/2*
unregister_notify/4Unregister as the leader for an execution and notify waiting processes.
wait_for_test_result/1*
worker_event/5*Log an event with the worker process.
## Function Details ## ### await/4 ### `await(Worker, Msg1, Msg2, Opts) -> any()` If there was already an Erlang process handling this execution, we should register with them and wait for them to notify us of completion. ### deduplicated_execution_test/0 * ### `deduplicated_execution_test() -> any()` Test merging and returning a value with a persistent worker. ### default_await/5 ### `default_await(Worker, GroupName, Msg1, Msg2, Opts) -> any()` Default await function that waits for a resolution from a worker. ### default_grouper/3 ### `default_grouper(Msg1, Msg2, Opts) -> any()` Create a group name from a Msg1 and Msg2 pair as a tuple. ### default_worker/3 ### `default_worker(GroupName, Msg1, Opts) -> any()` A server function for handling persistent executions. ### do_monitor/3 * ### `do_monitor(Group, Last, Opts) -> any()` ### find_execution/2 * ### `find_execution(Groupname, Opts) -> any()` Find a group with the given name. ### find_or_register/3 ### `find_or_register(Msg1, Msg2, Opts) -> any()` Register the process to lead an execution if none is found, otherwise signal that we should await resolution. ### find_or_register/4 * ### `find_or_register(GroupName, Msg1, Msg2, Opts) -> any()` ### forward_work/2 ### `forward_work(NewPID, Opts) -> any()` Forward requests to a newly delegated execution process. ### group/3 ### `group(Msg1, Msg2, Opts) -> any()` Calculate the group name for a Msg1 and Msg2 pair. Uses the Msg1's `group` function if it is found in the `info`, otherwise uses the default. ### notify/4 ### `notify(GroupName, Msg2, Msg3, Opts) -> any()` Check our inbox for processes that are waiting for the resolution of this execution. Comes in two forms: 1. Notify on group name alone. 2. Notify on group name and Msg2. ### persistent_worker_test/0 * ### `persistent_worker_test() -> any()` Test spawning a default persistent worker. ### register_groupname/2 * ### `register_groupname(Groupname, Opts) -> any()` Register for performing an AO-Core resolution. ### send_response/4 * ### `send_response(Listener, GroupName, Msg2, Msg3) -> any()` Helper function that wraps responding with a new Msg3. ### spawn_after_execution_test/0 * ### `spawn_after_execution_test() -> any()` ### spawn_test_client/2 * ### `spawn_test_client(Msg1, Msg2) -> any()` ### spawn_test_client/3 * ### `spawn_test_client(Msg1, Msg2, Opts) -> any()` ### start/0 * ### `start() -> any()` Ensure that the `pg` module is started. ### start_monitor/0 ### `start_monitor() -> any()` Start a monitor that prints the current members of the group every n seconds. ### start_monitor/1 ### `start_monitor(Group) -> any()` ### start_monitor/2 * ### `start_monitor(Group, Opts) -> any()` ### start_worker/2 ### `start_worker(Msg, Opts) -> any()` Start a worker process that will hold a message in memory for future executions. ### start_worker/3 ### `start_worker(GroupName, NotMsg, Opts) -> any()` ### stop_monitor/1 ### `stop_monitor(PID) -> any()` ### test_device/0 * ### `test_device() -> any()` ### test_device/1 * ### `test_device(Base) -> any()` ### unregister/3 * ### `unregister(Msg1, Msg2, Opts) -> any()` Unregister for being the leader on an AO-Core resolution. ### unregister_groupname/2 * ### `unregister_groupname(Groupname, Opts) -> any()` ### unregister_notify/4 ### `unregister_notify(GroupName, Msg2, Msg3, Opts) -> any()` Unregister as the leader for an execution and notify waiting processes. 
### wait_for_test_result/1 * ### `wait_for_test_result(Ref) -> any()` ### worker_event/5 * ### `worker_event(Group, Data, Msg1, Msg2, Opts) -> any()` Log an event with the worker process. If we used the default grouper function, we should also include the Msg1 and Msg2 in the event. If we did not, we assume that the group name expresses enough information to identify the request. --- END OF FILE: docs/resources/source-code/hb_persistent.md --- --- START OF FILE: docs/resources/source-code/hb_private.md --- # [Module hb_private.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_private.erl) This module provides basic helper utilities for managing the private element of a message, which can be used to store state that is not included in serialized messages, or those granted to users via the APIs. ## Description ## Private elements of a message can be useful for storing state that is only relevant temporarily. For example, a device might use the private element to store a cache of values that are expensive to recompute. They should _not_ be used for encoding state that makes the execution of a device non-deterministic (unless you are sure you know what you are doing). The `set` and `get` functions of this module allow you to run those keys as AO-Core paths if you would like to have private `devices` in the message's non-public zone. See `hb_ao` for more information about the AO-Core protocol and private elements of messages. ## Function Index ##
from_message/1Return the private key from a message.
get/3Helper for getting a value from the private element of a message.
get/4
get_private_key_test/0*
is_private/1Check if a key is private.
priv_ao_opts/1*The opts map that should be used when resolving paths against the private element of a message.
remove_private_specifier/2*Remove the first key from the path if it is a private specifier.
reset/1Unset all of the private keys in a message.
set/3
set/4Helper function for setting a key in the private element of a message.
set_priv/2Helper function for setting the complete private element of a message.
set_private_test/0*
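A minimal sketch of stashing and reading temporary state via the private element, assuming hypothetical keys and values. Exact return conventions are not asserted; expectations appear as comments only.

```erlang
%% Hedged sketch (Erlang shell): the key path and value are hypothetical.
Msg0 = #{ <<"public-key">> => <<"visible">> }.
%% Stash a temporary value under the message's private element:
Msg1 = hb_private:set(Msg0, <<"cache/last-result">>, <<"42">>, #{}).
%% Read it back; it is not part of the message's public, serialized keys:
Value = hb_private:get(<<"cache/last-result">>, Msg1, #{}).  %% -> <<"42">> expected
%% Drop all private state again:
Msg2 = hb_private:reset(Msg1).
```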
## Function Details ## ### from_message/1 ### `from_message(Msg) -> any()` Return the `private` key from a message. If the key does not exist, an empty map is returned. ### get/3 ### `get(Key, Msg, Opts) -> any()` Helper for getting a value from the private element of a message. Uses AO-Core resolve under-the-hood, removing the private specifier from the path if it exists. ### get/4 ### `get(InputPath, Msg, Default, Opts) -> any()` ### get_private_key_test/0 * ### `get_private_key_test() -> any()` ### is_private/1 ### `is_private(Key) -> any()` Check if a key is private. ### priv_ao_opts/1 * ### `priv_ao_opts(Opts) -> any()` The opts map that should be used when resolving paths against the private element of a message. ### remove_private_specifier/2 * ### `remove_private_specifier(InputPath, Opts) -> any()` Remove the first key from the path if it is a private specifier. ### reset/1 ### `reset(Msg) -> any()` Unset all of the private keys in a message. ### set/3 ### `set(Msg, PrivMap, Opts) -> any()` ### set/4 ### `set(Msg, InputPath, Value, Opts) -> any()` Helper function for setting a key in the private element of a message. ### set_priv/2 ### `set_priv(Msg, PrivMap) -> any()` Helper function for setting the complete private element of a message. ### set_private_test/0 * ### `set_private_test() -> any()` --- END OF FILE: docs/resources/source-code/hb_private.md --- --- START OF FILE: docs/resources/source-code/hb_process_monitor.md --- # [Module hb_process_monitor.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_process_monitor.erl) ## Function Index ##
handle_crons/1*
server/1*
start/1
start/2
start/3
stop/1
ticker/2*
## Function Details ## ### handle_crons/1 * ### `handle_crons(State) -> any()` ### server/1 * ### `server(State) -> any()` ### start/1 ### `start(ProcID) -> any()` ### start/2 ### `start(ProcID, Rate) -> any()` ### start/3 ### `start(ProcID, Rate, Cursor) -> any()` ### stop/1 ### `stop(PID) -> any()` ### ticker/2 * ### `ticker(Monitor, Rate) -> any()` --- END OF FILE: docs/resources/source-code/hb_process_monitor.md --- --- START OF FILE: docs/resources/source-code/hb_router.md --- # [Module hb_router.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_router.erl) ## Function Index ##
find/2
find/3
find/4*
## Function Details ## ### find/2 ### `find(Type, ID) -> any()` ### find/3 ### `find(Type, ID, Address) -> any()` ### find/4 * ### `find(Type, ID, Address, Opts) -> any()` --- END OF FILE: docs/resources/source-code/hb_router.md --- --- START OF FILE: docs/resources/source-code/hb_singleton.md --- # [Module hb_singleton.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_singleton.erl) A parser that translates AO-Core HTTP API requests in TABM format into an ordered list of messages to evaluate. ## Description ## The details of this format are described in `docs/ao-core-http-api.md`. Syntax overview: ``` Singleton: Message containing keys and a path field, which may also contain a query string of key-value pairs. Path: - /Part1/Part2/.../PartN/ => [Part1, Part2, ..., PartN] - /ID/Part2/.../PartN => [ID, Part2, ..., PartN] Part: (Key + Resolution), Device?, #{ K => V}? - Part => #{ path => Part } - Part&Key=Value => #{ path => Part, Key => Value } - Part&Key => #{ path => Part, Key => true } - Part&k1=v1&k2=v2 => #{ path => Part, k1 => <<"v1">>, k2 => <<"v2">> } - Part~Device => {as, Device, #{ path => Part }} - Part~D&K1=V1 => {as, D, #{ path => Part, K1 => <<"v1">> }} - pt&k1+int=1 => #{ path => pt, k1 => 1 } - pt~d&k1+int=1 => {as, d, #{ path => pt, k1 => 1 }} - (/nested/path) => Resolution of the path /nested/path - (/nested/path&k1=v1) => (resolve /nested/path)#{k1 => v1} - (/nested/path~D&K1=V1) => (resolve /nested/path)#{K1 => V1} - pt&k1+res=(/a/b/c) => #{ path => pt, k1 => (resolve /a/b/c) } Key: - key: <<"value">> => #{ key => <<"value">>, ... } for all messages - n.key: <<"value">> => #{ key => <<"value">>, ... } for Nth message - key+int: 1 => #{ key => 1, ... } - key+res: /nested/path => #{ key => (resolve /nested/path), ... } - N.Key+res=(/a/b/c) => #{ Key => (resolve /a/b/c), ... } ``` ## Data Types ## ### ao_message() ###

ao_message() = map() | binary()
### tabm_message() ###

tabm_message() = map()
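As a hedged illustration of the syntax overview above, the sketch below normalizes a singleton request that carries an inlined, typed key. The output shape in the comment is approximate; the `*_test/0` functions in this module are the authoritative vectors.

```erlang
%% Hedged sketch (Erlang shell): normalizing a singleton request with an
%% inlined, typed key. The path and keys are hypothetical.
Singleton = #{ <<"path">> => <<"/base/compute&slot+int=3">> }.
Msgs = hb_singleton:from(Singleton, #{}).
%% Expected, roughly: a message for `base`, followed by one resembling
%% #{ path => <<"compute">>, slot => 3 }.
```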
## Function Index ##
all_path_parts/2*Extract all of the parts from the binary, given (a list of) separators.
append_path/2*
apply_types/2*Step 3: Apply types to values and remove specifiers.
basic_hashpath_test/0*
basic_hashpath_to_test/0*
build_messages/3*Step 5: Merge the base message with the scoped messages.
decode_string/1*Attempt Cowboy URL decode, then sanitize the result.
do_build/4*
from/2Normalize a singleton TABM message into a list of executable AO-Core messages.
group_scoped/2*Step 4: Group headers/query by N-scope.
inlined_keys_test/0*
inlined_keys_to_test/0*
maybe_join/2*Join a list of items with a separator, or return the first item if there is only one item.
maybe_subpath/2*Check if the string is a subpath, returning it in parsed form, or the original string with a specifier.
maybe_typed/3*Parse a key's type (applying it to the value) and device name if present.
multiple_inlined_keys_test/0*
multiple_inlined_keys_to_test/0*
multiple_messages_test/0*
multiple_messages_to_test/0*
normalize_base/1*Normalize the base path.
parse_explicit_message_test/0*
parse_full_path/1*Parse the relative reference into path, query, and fragment.
parse_inlined_key_val/2*Extrapolate the inlined key-value pair from a path segment.
parse_part/2*Parse a path part into a message or an ID.
parse_part_mods/3*Parse part modifiers: 1.
parse_scope/1*Get the scope of a key.
part/2*Extract the characters from the binary until a separator is found.
part/4*
path_messages/2*Step 2: Decode, split and sanitize the path.
path_parts/2*Split the path into segments, filtering out empty segments and segments that are too long.
path_parts_test/0*
scoped_key_test/0*
scoped_key_to_test/0*
simple_to_test/0*
single_message_test/0*
subpath_in_inlined_test/0*
subpath_in_inlined_to_test/0*
subpath_in_key_test/0*
subpath_in_key_to_test/0*
subpath_in_path_test/0*
subpath_in_path_to_test/0*
to/1Convert a list of AO-Core messages into a TABM message.
to_suite_test_/0*
type/1*
typed_key_test/0*
typed_key_to_test/0*
## Function Details ## ### all_path_parts/2 * ### `all_path_parts(Sep, Bin) -> any()` Extract all of the parts from the binary, given (a list of) separators. ### append_path/2 * ### `append_path(PathPart, Message) -> any()` ### apply_types/2 * ### `apply_types(Msg, Opts) -> any()` Step 3: Apply types to values and remove specifiers. ### basic_hashpath_test/0 * ### `basic_hashpath_test() -> any()` ### basic_hashpath_to_test/0 * ### `basic_hashpath_to_test() -> any()` ### build_messages/3 * ### `build_messages(Msgs, ScopedModifications, Opts) -> any()` Step 5: Merge the base message with the scoped messages. ### decode_string/1 * ### `decode_string(B) -> any()` Attempt Cowboy URL decode, then sanitize the result. ### do_build/4 * ### `do_build(I, Rest, ScopedKeys, Opts) -> any()` ### from/2 ### `from(RawMsg, Opts) -> any()` Normalize a singleton TABM message into a list of executable AO-Core messages. ### group_scoped/2 * ### `group_scoped(Map, Msgs) -> any()` Step 4: Group headers/query by N-scope. `N.Key` => applies to Nth step. Otherwise => `global` ### inlined_keys_test/0 * ### `inlined_keys_test() -> any()` ### inlined_keys_to_test/0 * ### `inlined_keys_to_test() -> any()` ### maybe_join/2 * ### `maybe_join(Items, Sep) -> any()` Join a list of items with a separator, or return the first item if there is only one item. If there are no items, return an empty binary. ### maybe_subpath/2 * ### `maybe_subpath(Str, Opts) -> any()` Check if the string is a subpath, returning it in parsed form, or the original string with a specifier. ### maybe_typed/3 * ### `maybe_typed(Key, Value, Opts) -> any()` Parse a key's type (applying it to the value) and device name if present. We allow `` characters as type indicators because some URL-string encoders (e.g. Chrome) will encode `+` characters in a form that query-string parsers interpret as `` characters. ### multiple_inlined_keys_test/0 * ### `multiple_inlined_keys_test() -> any()` ### multiple_inlined_keys_to_test/0 * ### `multiple_inlined_keys_to_test() -> any()` ### multiple_messages_test/0 * ### `multiple_messages_test() -> any()` ### multiple_messages_to_test/0 * ### `multiple_messages_to_test() -> any()` ### normalize_base/1 * ### `normalize_base(Rest) -> any()` Normalize the base path. ### parse_explicit_message_test/0 * ### `parse_explicit_message_test() -> any()` ### parse_full_path/1 * ### `parse_full_path(RelativeRef) -> any()` Parse the relative reference into path, query, and fragment. ### parse_inlined_key_val/2 * ### `parse_inlined_key_val(Bin, Opts) -> any()` Extrapolate the inlined key-value pair from a path segment. If the key has a value, it may provide a type (as with typical keys), but if a value is not provided, it is assumed to be a boolean `true`. ### parse_part/2 * ### `parse_part(ID, Opts) -> any()` Parse a path part into a message or an ID. Applies the syntax rules outlined in the module doc, in the following order: 1. ID 2. Part subpath resolutions 3. Inlined key-value pairs 4. Device specifier ### parse_part_mods/3 * ### `parse_part_mods(X1, Msg, Opts) -> any()` Parse part modifiers: 1. `~Device` => `{as, Device, Msg}` 2. `&K=V` => `Msg#{ K => V }` ### parse_scope/1 * ### `parse_scope(KeyBin) -> any()` Get the scope of a key. Adds 1 to account for the base message. ### part/2 * ### `part(Sep, Bin) -> any()` Extract the characters from the binary until a separator is found. The first argument of the function is an explicit separator character, or a list of separator characters. 
Returns a tuple with the separator, the accumulated characters, and the rest of the binary. ### part/4 * ### `part(Seps, X2, Depth, CurrAcc) -> any()` ### path_messages/2 * ### `path_messages(RawBin, Opts) -> any()` Step 2: Decode, split and sanitize the path. Split by `/` but avoid subpath components, such that their own path parts are not dissociated from their parent path. ### path_parts/2 * ### `path_parts(Sep, PathBin) -> any()` Split the path into segments, filtering out empty segments and segments that are too long. ### path_parts_test/0 * ### `path_parts_test() -> any()` ### scoped_key_test/0 * ### `scoped_key_test() -> any()` ### scoped_key_to_test/0 * ### `scoped_key_to_test() -> any()` ### simple_to_test/0 * ### `simple_to_test() -> any()` ### single_message_test/0 * ### `single_message_test() -> any()` ### subpath_in_inlined_test/0 * ### `subpath_in_inlined_test() -> any()` ### subpath_in_inlined_to_test/0 * ### `subpath_in_inlined_to_test() -> any()` ### subpath_in_key_test/0 * ### `subpath_in_key_test() -> any()` ### subpath_in_key_to_test/0 * ### `subpath_in_key_to_test() -> any()` ### subpath_in_path_test/0 * ### `subpath_in_path_test() -> any()` ### subpath_in_path_to_test/0 * ### `subpath_in_path_to_test() -> any()` ### to/1 ###

to(Messages::[ao_message()]) -> tabm_message()

Convert a list of AO-Core messages into a TABM message. ### to_suite_test_/0 * ### `to_suite_test_() -> any()` ### type/1 * ### `type(Value) -> any()` ### typed_key_test/0 * ### `typed_key_test() -> any()` ### typed_key_to_test/0 * ### `typed_key_to_test() -> any()` --- END OF FILE: docs/resources/source-code/hb_singleton.md --- --- START OF FILE: docs/resources/source-code/hb_store_fs.md --- # [Module hb_store_fs.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_store_fs.erl) A key-value store implementation, following the `hb_store` behavior and interface. ## Description ## This implementation utilizes the node's local file system as its storage mechanism, offering an alternative to other stores that require the compilation of additional libraries in order to function. As this store implementation operates using Erlang's native `file` and `filelib` mechanisms, it largely inherits its performance characteristics from those of the underlying OS/filesystem drivers. Certain filesystems can be quite performant for the types of workload that HyperBEAM AO-Core execution requires (many reads and writes to explicit keys, few directory 'listing' or search operations), while others perform suboptimally. Additionally, this store implementation offers the ability for simple integration of HyperBEAM with other non-volatile storage media: `hb_store_fs` will interact with any service that implements the host operating system's native filesystem API. By mounting devices via `FUSE` (etc), HyperBEAM is able to interact with a large number of existing storage systems (for example, S3-compatible cloud storage APIs, etc). ## Function Index ##
add_prefix/2*Add the directory prefix to a path.
list/2List contents of a directory in the store.
make_group/2Create a directory (group) in the store.
make_link/3Create a symlink, handling the case where the link would point to itself.
read/1*
read/2Read a key from the store, following symlinks as needed.
remove_prefix/2*Remove the directory prefix from a path.
reset/1Reset the store by completely removing its directory and recreating it.
resolve/2Replace links in a path successively, returning the final path.
resolve/3*
scope/0The file-based store is always local, for now.
scope/1
start/1Initialize the file system store with the given data directory.
stop/1Stop the file system store.
type/1*
type/2Determine the type of a key in the store.
write/3Write a value to the specified path in the store.
## Function Details ## ### add_prefix/2 * ### `add_prefix(X1, Path) -> any()` Add the directory prefix to a path. ### list/2 ### `list(Opts, Path) -> any()` List contents of a directory in the store. ### make_group/2 ### `make_group(Opts, Path) -> any()` Create a directory (group) in the store. ### make_link/3 ### `make_link(Opts, Link, New) -> any()` Create a symlink, handling the case where the link would point to itself. ### read/1 * ### `read(Path) -> any()` ### read/2 ### `read(Opts, Key) -> any()` Read a key from the store, following symlinks as needed. ### remove_prefix/2 * ### `remove_prefix(X1, Path) -> any()` Remove the directory prefix from a path. ### reset/1 ### `reset(X1) -> any()` Reset the store by completely removing its directory and recreating it. ### resolve/2 ### `resolve(Opts, RawPath) -> any()` Replace links in a path successively, returning the final path. Each element of the path is resolved in turn, with the result of each resolution becoming the prefix for the next resolution. This allows paths to resolve across many links. For example, a structure as follows: /a/b/c: "Not the right data" /a/b -> /a/alt-b /a/alt-b/c: "Correct data" will resolve "a/b/c" to "Correct data". ### resolve/3 * ### `resolve(Opts, CurrPath, Rest) -> any()` ### scope/0 ### `scope() -> any()` The file-based store is always local, for now. In the future, we may want to allow an FS store to be shared across a cluster, and thus be remote. ### scope/1 ### `scope(X1) -> any()` ### start/1 ### `start(X1) -> any()` Initialize the file system store with the given data directory. ### stop/1 ### `stop(X1) -> any()` Stop the file system store. Currently a no-op. ### type/1 * ### `type(Path) -> any()` ### type/2 ### `type(Opts, Key) -> any()` Determine the type of a key in the store. ### write/3 ### `write(Opts, PathComponents, Value) -> any()` Write a value to the specified path in the store. --- END OF FILE: docs/resources/source-code/hb_store_fs.md --- --- START OF FILE: docs/resources/source-code/hb_store_gateway.md --- # [Module hb_store_gateway.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_store_gateway.erl) A store module that reads data from the node's Arweave gateway and GraphQL routes, additionally including store-specific routes. ## Function Index ##
cache_read_message_test/0*Ensure that saving to the gateway store works.
external_http_access_test/0*Test that the default node config allows for data to be accessed.
graphql_as_store_test_/0*Store is accessible via the default options.
graphql_from_cache_test/0*Stored messages are accessible via hb_cache accesses.
list/2
manual_local_cache_test/0*
maybe_cache/2*Cache the data if the cache is enabled.
read/2Read the data at the given key from the GraphQL route.
resolve/2
scope/1The scope of a GraphQL store is always remote, due to performance.
specific_route_test/0*Routes can be specified in the options, overriding the default routes.
store_opts_test/0*Test to verify store opts is being set for Data-Protocol ao.
type/2Get the type of the data at the given key.
verifiability_test/0*Test that items retrieved from the gateway store are verifiable.
## Function Details ## ### cache_read_message_test/0 * ### `cache_read_message_test() -> any()` Ensure that saving to the gateway store works. ### external_http_access_test/0 * ### `external_http_access_test() -> any()` Test that the default node config allows for data to be accessed. ### graphql_as_store_test_/0 * ### `graphql_as_store_test_() -> any()` Store is accessible via the default options. ### graphql_from_cache_test/0 * ### `graphql_from_cache_test() -> any()` Stored messages are accessible via `hb_cache` accesses. ### list/2 ### `list(StoreOpts, Key) -> any()` ### manual_local_cache_test/0 * ### `manual_local_cache_test() -> any()` ### maybe_cache/2 * ### `maybe_cache(StoreOpts, Data) -> any()` Cache the data if the cache is enabled. The `store` option may either be `false` to disable local caching, or a store definition to use as the cache. ### read/2 ### `read(StoreOpts, Key) -> any()` Read the data at the given key from the GraphQL route. Will only attempt to read the data if the key is an ID. ### resolve/2 ### `resolve(X1, Key) -> any()` ### scope/1 ### `scope(X1) -> any()` The scope of a GraphQL store is always remote, due to performance. ### specific_route_test/0 * ### `specific_route_test() -> any()` Routes can be specified in the options, overriding the default routes. We test this by inversion: If the above cache read test works, then we know that the default routes allow access to the item. If the test below were to produce the same result, despite an empty 'only' route list, then we would know that the module is not respecting the route list. ### store_opts_test/0 * ### `store_opts_test() -> any()` Test to verify store opts is being set for Data-Protocol ao. ### type/2 ### `type(StoreOpts, Key) -> any()` Get the type of the data at the given key. We potentially cache the result, so that we don't have to read the data from the GraphQL route multiple times. ### verifiability_test/0 * ### `verifiability_test() -> any()` Test that items retrieved from the gateway store are verifiable. --- END OF FILE: docs/resources/source-code/hb_store_gateway.md --- --- START OF FILE: docs/resources/source-code/hb_store_lmdb.md --- # [Module hb_store_lmdb.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_store_lmdb.erl) An LMDB (Lightning Memory-Mapped Database) implementation of the HyperBEAM store interface. ## Description ## This module provides a persistent key-value store backend using LMDB, which is a high-performance embedded transactional database. The implementation follows a singleton pattern where each database environment gets its own dedicated server process to manage transactions and coordinate writes. Key features include: * Asynchronous writes with batched transactions for performance * Automatic link resolution for creating symbolic references between keys * Group support for organizing hierarchical data structures * Prefix-based key listing for directory-like navigation * Process-local caching of database handles for efficiency The module implements a dual-flush strategy: writes are accumulated in memory and flushed either after an idle timeout or when explicitly requested during read operations that encounter cache misses. ## Function Index ##
add_path/3Add two path components together.
basic_test/0*Test suite demonstrating basic store operations.
cache_debug_test/0*Debug test to understand cache linking behavior.
cache_style_test/0*Test cache-style usage through hb_store interface.
commit_manager/2*Background process that enforces maximum flush intervals.
create_parent_groups/3*Helper function to recursively create parent groups.
ensure_parent_groups/2*Ensure all parent groups exist for a given path.
ensure_transaction/1*Ensure that the server has an active LMDB transaction for writes.
exact_hb_store_test/0*Test that matches the exact hb_store hierarchical test pattern.
find_env/1*Retrieve or create the LMDB environment handle for a database.
find_pid/1*Locate an existing server process or spawn a new one if needed.
fold_after/4*Fold over a database after a given path.
fold_cursor/5*
group_test/0*Group test - verifies group creation and type detection.
is_link/1*Helper function to check if a value is a link and extract the target.
isolated_type_debug_test/0*Isolated test focusing on the exact cache issue.
link_fragment_test/0*
link_key_list_test/0*Link key list test - verifies symbolic link creation using structured key paths.
link_test/0*Link test - verifies symbolic link creation and resolution.
list/2List all keys that start with a given prefix.
list_test/0*List test - verifies prefix-based key listing functionality.
list_with_link_test/0*Test that list function resolves links correctly.
make_group/2Create a group entry that can contain other keys hierarchically.
make_link/3Create a symbolic link from a new key to an existing key.
nested_map_cache_test/0*Test nested map storage with cache-like linking behavior.
notify_flush/1*Notify all processes waiting for a flush operation to complete.
path/2Transform a path into the store's canonical form.
path_traversal_link_test/0*Path traversal link test - verifies link resolution during path traversal.
read/2Read a value from the database by key, with automatic link resolution.
read_direct/2*Read a value directly from the database with link resolution.
read_with_flush/2*Read with immediate flush for cases where we need to see recent writes.
read_with_retry/2*Unified read function that handles LMDB reads with retry logic.
read_with_retry/3*
reconstruct_map/2*
reset/1Completely delete the database directory and all its contents.
resolve/2Resolve a path by following any symbolic links.
resolve_path_links/2*Resolve links in a path, checking each segment except the last.
resolve_path_links/3*
resolve_path_links_acc/4*
scope/0Return the scope of this storage backend.
scope/1Return the scope of this storage backend (ignores parameters).
server/1*Main server loop that handles database operations and manages transactions.
server_flush/1*Commit the current transaction to disk and clean up state.
server_write/3*Add a key-value pair to the current transaction, creating one if needed.
start/1Start the LMDB storage system for a given database configuration.
stop/1Gracefully shut down the database server and close the environment.
sync/1*Force an immediate flush of all pending writes to disk.
to_path/1*Helper function to convert to a path.
type/2Determine whether a key represents a simple value or composite group.
type_test/0*Type test - verifies type detection for both simple and composite entries.
write/3Write a key-value pair to the database asynchronously.
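
A brief usage sketch of the public interface above. The option values are illustrative assumptions: the `StoreOpts` map is shown with binary keys, with `<<"prefix">>` naming the database directory and `<<"capacity">>` bounding its size, as described in `start/1`.

```erlang
%% Illustrative only: option values and units are assumptions, not canonical defaults.
StoreOpts = #{
    <<"prefix">> => <<"/tmp/hb-lmdb-example">>, % database directory
    <<"capacity">> => 16 * 1024 * 1024          % maximum database size (unit assumed to be bytes)
},
{ok, _Pid} = hb_store_lmdb:start(StoreOpts),
%% Writes return immediately; they are committed by a later flush.
ok = hb_store_lmdb:write(StoreOpts, <<"colors/red">>, <<"ff0000">>),
ok = hb_store_lmdb:write(StoreOpts, <<"colors/blue">>, <<"0000ff">>),
%% Reads trigger a flush on a miss, so recent writes become visible.
{ok, <<"ff0000">>} = hb_store_lmdb:read(StoreOpts, <<"colors/red">>),
%% Links resolve transparently on read.
ok = hb_store_lmdb:make_link(StoreOpts, <<"colors/red">>, <<"favorite">>),
{ok, <<"ff0000">>} = hb_store_lmdb:read(StoreOpts, <<"favorite">>),
%% Prefix listing behaves like a directory scan.
{ok, Keys} = hb_store_lmdb:list(StoreOpts, <<"colors">>),
hb_store_lmdb:stop(StoreOpts).
```
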
## Function Details ## ### add_path/3 ### `add_path(Opts, Path1, Path2) -> any()` Add two path components together. For LMDB, this concatenates the path lists. ### basic_test/0 * ### `basic_test() -> any()` Test suite demonstrating basic store operations. The following functions implement unit tests using EUnit to verify that the LMDB store implementation correctly handles various scenarios including basic read/write operations, hierarchical listing, group creation, link resolution, and type detection. Basic store test - verifies fundamental read/write functionality. This test creates a temporary database, writes a key-value pair, reads it back to verify correctness, and cleans up by stopping the database. It serves as a sanity check that the basic storage mechanism is working. ### cache_debug_test/0 * ### `cache_debug_test() -> any()` Debug test to understand cache linking behavior ### cache_style_test/0 * ### `cache_style_test() -> any()` Test cache-style usage through hb_store interface ### commit_manager/2 * ### `commit_manager(Opts, Server) -> any()` Background process that enforces maximum flush intervals. This function runs in a separate process linked to the main server and ensures that transactions are committed within a reasonable time frame even during periods of continuous write activity. It sends periodic flush requests to the main server based on the configured maximum flush time. The commit manager provides a safety net against data loss by preventing transactions from remaining uncommitted indefinitely. It works in conjunction with the idle timeout mechanism to provide comprehensive data safety guarantees. The process runs in an infinite loop, coordinating with the main server through message passing and restarting its timer after each successful flush. ### create_parent_groups/3 * ### `create_parent_groups(Opts, Current, Rest) -> any()` Helper function to recursively create parent groups. ### ensure_parent_groups/2 * ###

ensure_parent_groups(Opts::map(), Path::binary()) -> ok

`Opts`: Database configuration map
`Path`: The path whose parents should exist
returns: ok Ensure all parent groups exist for a given path. This function creates the necessary parent groups for a path, similar to how filesystem stores use ensure_dir. For example, if the path is "a/b/c/file", it will ensure groups "a", "a/b", and "a/b/c" exist. ### ensure_transaction/1 * ### `ensure_transaction(State) -> any()` Ensure that the server has an active LMDB transaction for writes. This function implements lazy transaction creation by checking if a transaction already exists in the server state. If not, it creates a new read-write transaction and opens the default database within it. The lazy approach improves efficiency by avoiding transaction overhead when the server is idle, while ensuring that write operations always have a transaction available when needed. Transactions in LMDB are lightweight but still represent a commitment of resources, so creating them only when needed helps optimize memory usage and system performance. ### exact_hb_store_test/0 * ### `exact_hb_store_test() -> any()` Test that matches the exact hb_store hierarchical test pattern ### find_env/1 * ### `find_env(Opts) -> any()` Retrieve or create the LMDB environment handle for a database. ### find_pid/1 * ### `find_pid(StoreOpts) -> any()` Locate an existing server process or spawn a new one if needed. ### fold_after/4 * ### `fold_after(Opts, Path, Fun, Acc) -> any()` Fold over a database after a given path. The `Fun` is called with the key and value, and the accumulator. ### fold_cursor/5 * ### `fold_cursor(X1, Txn, Cur, Fun, Acc) -> any()` ### group_test/0 * ### `group_test() -> any()` Group test - verifies group creation and type detection. This test creates a group entry and verifies that it is correctly identified as a composite type and cannot be read directly (like filesystem directories). ### is_link/1 * ### `is_link(Value) -> any()` Helper function to check if a value is a link and extract the target. ### isolated_type_debug_test/0 * ### `isolated_type_debug_test() -> any()` Isolated test focusing on the exact cache issue ### link_fragment_test/0 * ### `link_fragment_test() -> any()` ### link_key_list_test/0 * ### `link_key_list_test() -> any()` Link key list test - verifies symbolic link creation using structured key paths. This test demonstrates the store's ability to handle complex key structures represented as lists of binary segments, and verifies that symbolic links work correctly when the target key is specified as a list rather than a flat binary string. The test creates a hierarchical key structure using a list format (which presumably gets converted to a path-like binary internally), creates a symbolic link pointing to that structured key, and verifies that link resolution works transparently to return the original value. This is particularly important for applications that organize data in hierarchical structures where keys represent nested paths or categories, and need to create shortcuts or aliases to deeply nested data. ### link_test/0 * ### `link_test() -> any()` Link test - verifies symbolic link creation and resolution. This test creates a regular key-value pair, creates a link pointing to it, and verifies that reading from the link location returns the original value. This demonstrates the transparent link resolution mechanism. ### list/2 ###

list(Opts::map(), Path::binary()) -> {ok, [binary()]} | {error, term()}

`Path`: Binary prefix to search for
returns: {ok, [Key]} list of matching keys, {error, Reason} on failure List all keys that start with a given prefix. This function provides directory-like navigation by finding all keys that begin with the specified path prefix. It uses LMDB's fold operation to efficiently scan through the database and collect matching keys. The implementation only returns keys that are longer than the prefix itself, ensuring that the prefix acts like a directory separator. For example, listing with prefix "colors" will return "colors/red" and "colors/blue" but not "colors" itself. If the Path points to a link, the function resolves the link and lists the contents of the target directory instead. This is particularly useful for implementing hierarchical data organization and providing tree-like navigation interfaces in applications. ### list_test/0 * ### `list_test() -> any()` List test - verifies prefix-based key listing functionality. This test creates several keys with hierarchical names and verifies that the list operation correctly returns only keys matching a specific prefix. It demonstrates the directory-like navigation capabilities of the store. ### list_with_link_test/0 * ### `list_with_link_test() -> any()` Test that list function resolves links correctly ### make_group/2 ###

make_group(Opts::map(), GroupName::binary()) -> ok | {error, term()}

`Opts`: Database configuration map
`GroupName`: Binary name for the group
returns: Result of the write operation Create a group entry that can contain other keys hierarchically. Groups in the HyperBeam system represent composite entries that can contain child elements, similar to directories in a filesystem. This function creates a group by storing the special value "group" at the specified key. The group mechanism allows applications to organize data hierarchically and provides semantic meaning that can be used by navigation and visualization tools to present appropriate user interfaces. Groups can be identified later using the type/2 function, which will return 'composite' for group entries versus 'simple' for regular key-value pairs. ### make_link/3 ###

make_link(Opts::map(), Existing::binary() | list(), New::binary()) -> ok

`Existing`: The key that already exists and contains the target value
`New`: The new key that should link to the existing key
returns: Result of the write operation Create a symbolic link from a new key to an existing key. This function implements a symbolic link mechanism by storing a special "link:" prefixed value at the new key location. When the new key is read, the system will automatically resolve the link and return the value from the target key instead. Links provide a way to create aliases, shortcuts, or alternative access paths to the same underlying data without duplicating storage. They can be chained together to create complex reference structures, though care should be taken to avoid circular references. The link resolution happens transparently during read operations, making links invisible to most application code while providing powerful organizational capabilities. ### nested_map_cache_test/0 * ### `nested_map_cache_test() -> any()` Test nested map storage with cache-like linking behavior This test demonstrates how to store a nested map structure where: 1. Each value is stored at data/{hash_of_value} 2. Links are created to compose the values back into the original map structure 3. Reading the composed structure reconstructs the original nested map ### notify_flush/1 * ### `notify_flush(State) -> any()` Notify all processes waiting for a flush operation to complete. This function handles the coordination between the server's flush operations and client processes that may be blocked waiting for data to be committed. It uses a non-blocking receive loop to collect all pending flush requests and respond to them immediately. The non-blocking nature (timeout of 0) ensures that the server doesn't get stuck waiting for messages that may not exist, while still handling all queued requests efficiently. ### path/2 ### `path(Opts, PathParts) -> any()` Transform a path into the store's canonical form. For LMDB, paths are simply joined with "/" separators. ### path_traversal_link_test/0 * ### `path_traversal_link_test() -> any()` Path traversal link test - verifies link resolution during path traversal. This test verifies that when reading a path as a list, intermediate path segments that are links get resolved correctly. For example, if "link" is a symbolic link to "group", then reading ["link", "key"] should resolve to reading ["group", "key"]. This functionality enables transparent redirection at the directory level, allowing reorganization of hierarchical data without breaking existing access patterns. ### read/2 ###

read(Opts::map(), PathParts::binary() | list()) -> {ok, binary()} | {error, term()}

`Opts`: Database configuration map
returns: {ok, Value} on success, {error, Reason} on failure Read a value from the database by key, with automatic link resolution. This function attempts to read a value directly from the committed database. If the key is not found, it triggers a flush operation to ensure any pending writes are committed before retrying the read. The function automatically handles link resolution: if a stored value begins with the "link:" prefix, it extracts the target key and recursively reads from that location instead. This creates a symbolic link mechanism that allows multiple keys to reference the same underlying data. When given a list of path segments, the function first attempts a direct read for optimal performance. Only if the direct read fails does it perform link resolution at each level of the path except the final segment, allowing path traversal through symbolic links to work transparently. Link resolution is transparent to the caller and can chain through multiple levels of indirection, though care should be taken to avoid circular references. ### read_direct/2 * ### `read_direct(Opts, Path) -> any()` Read a value directly from the database with link resolution. This is the internal implementation that handles actual database reads. ### read_with_flush/2 * ### `read_with_flush(Opts, Path) -> any()` Read with immediate flush for cases where we need to see recent writes. This is used when we expect the key to exist from a recent write operation. ### read_with_retry/2 * ### `read_with_retry(Opts, Path) -> any()` Unified read function that handles LMDB reads with retry logic. Returns {ok, Value}, not_found, or performs flush and retries. ### read_with_retry/3 * ### `read_with_retry(Opts, Path, RetriesRemaining) -> any()` ### reconstruct_map/2 * ### `reconstruct_map(StoreOpts, Path) -> any()` ### reset/1 ### `reset(Opts) -> any()` Completely delete the database directory and all its contents. This is a destructive operation that removes all data from the specified database. It first performs a graceful shutdown to ensure data consistency, then uses the system shell to recursively delete the entire database directory structure. This function is primarily intended for testing and development scenarios where you need to start with a completely clean database state. It should be used with extreme caution in production environments. ### resolve/2 ###

resolve(Opts::map(), Path::binary() | list()) -> binary()

`Path`: The path to resolve (binary or list)
returns: The resolved path as a binary Resolve a path by following any symbolic links. For LMDB, we handle links through our own "link:" prefix mechanism. This function resolves link chains in paths, similar to filesystem symlink resolution. It's used by the cache to resolve paths before type checking and reading. ### resolve_path_links/2 * ### `resolve_path_links(Opts, Path) -> any()` Resolve links in a path, checking each segment except the last. Returns the resolved path where any intermediate links have been followed. ### resolve_path_links/3 * ### `resolve_path_links(Opts, Path, Depth) -> any()` ### resolve_path_links_acc/4 * ### `resolve_path_links_acc(Opts, Tail, AccPath, Depth) -> any()` ### scope/0 ###

scope() -> local

returns: 'local' always Return the scope of this storage backend. The LMDB implementation is always local-only and does not support distributed operations. This function exists to satisfy the HyperBeam store interface contract and inform the system about the storage backend's capabilities. ### scope/1 ###

scope(X1::term()) -> local

returns: 'local' always Return the scope of this storage backend (ignores parameters). This is an alternate form of scope/0 that ignores any parameters passed to it. The LMDB backend is always local regardless of configuration. ### server/1 * ### `server(State) -> any()` Main server loop that handles database operations and manages transactions. This function implements the core server logic using Erlang's selective receive mechanism. It handles four types of messages: environment requests from readers, write requests that accumulate in transactions, explicit flush requests that commit pending data, and stop messages for graceful shutdown. The server uses a timeout-based flush strategy where it automatically commits transactions after a period of inactivity. This balances write performance (by batching operations) with data safety (by limiting the window of potential data loss). The server maintains its state as a map containing the LMDB environment, current transaction handle, and configuration parameters. State updates are handled functionally by passing modified state maps through tail-recursive calls. ### server_flush/1 * ### `server_flush(RawState) -> any()` Commit the current transaction to disk and clean up state. This function handles the critical operation of persisting accumulated writes to the database. If a transaction is active, it commits the transaction and notifies any processes waiting for the flush to complete. After committing, the server state is cleaned up by removing transaction references, preparing for the next batch of operations. If no transaction is active, the function is a no-op. The notification mechanism ensures that read operations blocked on cache misses can proceed once fresh data is available. ### server_write/3 * ### `server_write(RawState, Key, Value) -> any()` Add a key-value pair to the current transaction, creating one if needed. This function handles write operations by ensuring a transaction is active and then adding the key-value pair to it using LMDB's native interface. If no transaction exists, it creates one automatically. The function uses LMDB's direct NIF interface for maximum performance, bypassing higher-level abstractions that might add overhead. The write is added to the transaction but not committed until a flush occurs. ### start/1 ###

start(Opts::map()) -> {ok, pid()} | {error, term()}

returns: {ok, ServerPid} on success, {error, Reason} on failure Start the LMDB storage system for a given database configuration. This function initializes or connects to an existing LMDB database instance. It uses a singleton pattern, so multiple calls with the same configuration will return the same server process. The server process manages the LMDB environment and coordinates all database operations. The StoreOpts map must contain a "prefix" key specifying the database directory path. The required configuration also includes a "capacity" key for the maximum database size, along with flush timing parameters. ### stop/1 ### `stop(StoreOpts) -> any()` Gracefully shut down the database server and close the environment. This function performs an orderly shutdown of the database system by first stopping the server process (which flushes any pending writes) and then closing the LMDB environment to release system resources. The shutdown process ensures that no data is lost and all file handles are properly closed. After calling stop, the database can be restarted by calling any other function that triggers server creation. ### sync/1 * ###

sync(Opts::map()) -> ok | {error, term()}

returns: 'ok' when flush is complete, {error, Reason} on failure Force an immediate flush of all pending writes to disk. This function synchronously forces the database server to commit any pending writes in the current transaction. It blocks until the flush operation is complete, ensuring that all previously written data is durably stored before returning. This is useful when you need to ensure data is persisted immediately, rather than waiting for the automatic flush timers to trigger. Common use cases include critical checkpoints, before system shutdown, or when preparing for read operations that must see the latest writes. ### to_path/1 * ### `to_path(PathParts) -> any()` Helper function to convert to a path ### type/2 ###

type(Opts::map(), Key::binary()) -> composite | simple | not_found

`Opts`: Database configuration map
`Key`: The key to examine
returns: 'composite' for group entries, 'simple' for regular values Determine whether a key represents a simple value or composite group. This function reads the value associated with a key and examines its content to classify the entry type. Keys storing the literal binary "group" are considered composite (directory-like) entries, while all other values are treated as simple key-value pairs. This classification is used by higher-level HyperBeam components to understand the structure of stored data and provide appropriate navigation interfaces. ### type_test/0 * ### `type_test() -> any()` Type test - verifies type detection for both simple and composite entries. This test creates both a group (composite) entry and a regular (simple) entry, then verifies that the type detection function correctly identifies each one. This demonstrates the semantic classification system used by the store. ### write/3 ###

write(Opts::map(), PathParts::binary() | list(), Value::binary()) -> ok

`Opts`: Database configuration map
`Value`: Binary value to store
returns: 'ok' immediately (write happens asynchronously) Write a key-value pair to the database asynchronously. This function sends a write request to the database server process and returns immediately without waiting for the write to be committed to disk. The server accumulates writes in a transaction that is periodically flushed based on timing constraints or explicit flush requests. The asynchronous nature provides better performance for write-heavy workloads while the batching strategy ensures data consistency and reduces I/O overhead. However, recent writes may not be immediately visible to readers until the next flush occurs. --- END OF FILE: docs/resources/source-code/hb_store_lmdb.md --- --- START OF FILE: docs/resources/source-code/hb_store_lru.md --- # [Module hb_store_lru.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_store_lru.erl) An in-memory store implementation, following the `hb_store` behavior and interface. ## Description ## This implementation uses a least-recently-used cache first, and offloads evicted data to a specified non-volatile store over time. This cache is registered under `{in_memory, HTTPServerID}`, in `hb_name` so that all processes that are executing using the HTTP server’s Opts can find it quickly. The least-recently-used strategy (first is the most recent used, last is the least recently used) is implemented by keeping track of the order and bytes on ets tables: - A cache table containing all the entries along with the value size and key index. - A cache indexing table containing all the index pointing to the keys. The IDs are then sorted to ease the eviction policy. - A cache statistics table containing all the information about the cache size, capacity, and indexing. ## Function Index ##
add_cache_entry/3*
add_cache_index/3*
append_key_to_group/2*
assign_new_entry/7*
cache_size/1*
cache_tail_key/1*
cache_term_test/0*
clean_old_link/2*Remove the link association for the old linked data to the given key.
convert_if_list/1*
decrease_cache_size/2*
delete_cache_entry/2*
delete_cache_index/2*
ensure_dir/2*
ensure_dir/3*
ets_keys/2*List all of the keys in the store for a given path, supporting a special case for the root.
evict_all_entries/2*
evict_but_able_to_read_from_fs_store_test/0*
evict_items_with_insufficient_space_test/0*
evict_oldest_entry/3*
evict_oldest_entry/4*
evict_oldest_items_test/0*
fetch_cache_with_retry/2*
fetch_cache_with_retry/3*
get_cache_entry/2*
get_index_id/1*
get_persistent_store/1*
handle_group/3*
increase_cache_size/2*
init/2*Create the ets tables for the LRU cache: - The cache of data itself (public, with read concurrency enabled) - A set for the LRU's stats.
join/1*
link_cache_entry/4*
list/2List all the keys registered.
list_test/0*
make_group/2Create a directory inside the store.
make_link/3Make a link from a key to another in the store.
maybe_convert_to_binary/1*
maybe_create_dir/3*
offload_to_store/5*
put_cache_entry/4*
read/2Retrieve the value in the cache for the given key.
replace_entry/5*
replace_link_test/0*
reset/1Reset the store by completely cleaning the ETS tables and delegating the reset to the underlying offloading store.
reset_test/0*
resolve/2
resolve/3*
scope/1The LRU store is always local, for now.
server_loop/2*
start/1Start the LRU cache. The default capacity is used when no capacity is provided in the store options.
stop/1Stop the in-memory LRU by offloading the keys in the ETS tables before exiting the process.
stop_test/0*
sync/1*Force the caller to wait until the server has fully processed all messages in its mailbox, up to the initiation of the call.
table_keys/1*
table_keys/2*
table_keys/4*
test_opts/1*Generate a set of options for testing.
test_opts/2*
type/2Determine the type of a key in the store.
type_test/0*
unknown_value_test/0*
update_cache_size/3*
update_recently_used/3*
write/3Write an entry in the cache.
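
A sketch of direct usage, assuming defaults for anything not shown. The option keys here (for example `<<"capacity">>`) and the return shapes are assumptions for illustration; `test_opts/1` in the source shows the canonical test configuration, including how a persistent `fs` store is attached for offloading.

```erlang
%% Illustrative sketch: option keys and return shapes are assumptions.
StoreOpts = #{ <<"capacity">> => 1000 },  % omit to fall back to the default capacity
_ = hb_store_lru:start(StoreOpts),
hb_store_lru:write(StoreOpts, <<"greeting">>, <<"hello">>),
{ok, <<"hello">>} = hb_store_lru:read(StoreOpts, <<"greeting">>),
%% Groups and links follow the generic hb_store behaviour.
hb_store_lru:make_group(StoreOpts, <<"dir">>),
hb_store_lru:make_link(StoreOpts, <<"greeting">>, <<"dir/alias">>),
{ok, <<"hello">>} = hb_store_lru:read(StoreOpts, <<"dir/alias">>),
%% Offload any cached entries to the persistent store and shut down.
hb_store_lru:stop(StoreOpts).
```
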
## Function Details ## ### add_cache_entry/3 * ### `add_cache_entry(X1, Key, Value) -> any()` ### add_cache_index/3 * ### `add_cache_index(X1, ID, Key) -> any()` ### append_key_to_group/2 * ### `append_key_to_group(Key, Group) -> any()` ### assign_new_entry/7 * ### `assign_new_entry(State, Key, Value, ValueSize, Capacity, Group, Opts) -> any()` ### cache_size/1 * ### `cache_size(X1) -> any()` ### cache_tail_key/1 * ### `cache_tail_key(X1) -> any()` ### cache_term_test/0 * ### `cache_term_test() -> any()` ### clean_old_link/2 * ### `clean_old_link(Table, Link) -> any()` Remove the link association for the old linked data to the given key. ### convert_if_list/1 * ### `convert_if_list(Value) -> any()` ### decrease_cache_size/2 * ### `decrease_cache_size(X1, Size) -> any()` ### delete_cache_entry/2 * ### `delete_cache_entry(X1, Key) -> any()` ### delete_cache_index/2 * ### `delete_cache_index(X1, ID) -> any()` ### ensure_dir/2 * ### `ensure_dir(State, Path) -> any()` ### ensure_dir/3 * ### `ensure_dir(State, CurrentPath, Rest) -> any()` ### ets_keys/2 * ### `ets_keys(Opts, Path) -> any()` List all of the keys in the store for a given path, supporting a special case for the root. ### evict_all_entries/2 * ### `evict_all_entries(X1, Opts) -> any()` ### evict_but_able_to_read_from_fs_store_test/0 * ### `evict_but_able_to_read_from_fs_store_test() -> any()` ### evict_items_with_insufficient_space_test/0 * ### `evict_items_with_insufficient_space_test() -> any()` ### evict_oldest_entry/3 * ### `evict_oldest_entry(State, ValueSize, Opts) -> any()` ### evict_oldest_entry/4 * ### `evict_oldest_entry(State, ValueSize, FreeSize, Opts) -> any()` ### evict_oldest_items_test/0 * ### `evict_oldest_items_test() -> any()` ### fetch_cache_with_retry/2 * ### `fetch_cache_with_retry(Opts, Key) -> any()` ### fetch_cache_with_retry/3 * ### `fetch_cache_with_retry(Opts, Key, Retries) -> any()` ### get_cache_entry/2 * ### `get_cache_entry(Table, Key) -> any()` ### get_index_id/1 * ### `get_index_id(X1) -> any()` ### get_persistent_store/1 * ### `get_persistent_store(Opts) -> any()` ### handle_group/3 * ### `handle_group(State, Key, Opts) -> any()` ### increase_cache_size/2 * ### `increase_cache_size(X1, ValueSize) -> any()` ### init/2 * ### `init(From, StoreOpts) -> any()` Create the `ets` tables for the LRU cache: - The cache of data itself (public, with read concurrency enabled) - A set for the LRU's stats. - An ordered set for the cache's index. ### join/1 * ### `join(Key) -> any()` ### link_cache_entry/4 * ### `link_cache_entry(State, Existing, New, Opts) -> any()` ### list/2 ### `list(Opts, Path) -> any()` List all the keys registered. ### list_test/0 * ### `list_test() -> any()` ### make_group/2 ### `make_group(Opts, Key) -> any()` Create a directory inside the store. ### make_link/3 ### `make_link(Opts, Link, New) -> any()` Make a link from a key to another in the store. ### maybe_convert_to_binary/1 * ### `maybe_convert_to_binary(Value) -> any()` ### maybe_create_dir/3 * ### `maybe_create_dir(State, DirPath, Value) -> any()` ### offload_to_store/5 * ### `offload_to_store(TailKey, TailValue, Links, Group, Opts) -> any()` ### put_cache_entry/4 * ### `put_cache_entry(State, Key, Value, Opts) -> any()` ### read/2 ### `read(Opts, RawKey) -> any()` Retrieve the value in the cache for the given key. Because the cache uses an LRU policy, the key is moved to the most-recently-used position, re-prioritizing the cache entry. 
### replace_entry/5 * ### `replace_entry(State, Key, Value, ValueSize, X5) -> any()` ### replace_link_test/0 * ### `replace_link_test() -> any()` ### reset/1 ### `reset(Opts) -> any()` Reset the store by completely cleaning the ETS tables and delegating the reset to the underlying offloading store. ### reset_test/0 * ### `reset_test() -> any()` ### resolve/2 ### `resolve(Opts, Key) -> any()` ### resolve/3 * ### `resolve(Opts, CurrPath, Rest) -> any()` ### scope/1 ### `scope(X1) -> any()` The LRU store is always local, for now. ### server_loop/2 * ### `server_loop(State, Opts) -> any()` ### start/1 ### `start(StoreOpts) -> any()` Start the LRU cache. The default capacity is used when no capacity is provided in the store options. A bounded number of retries is applied when fetching cache entries that aren't immediately found due to timing issues in concurrent operations. ### stop/1 ### `stop(Opts) -> any()` Stop the in-memory LRU by offloading the keys in the ETS tables before exiting the process. ### stop_test/0 * ### `stop_test() -> any()` ### sync/1 * ### `sync(Server) -> any()` Force the caller to wait until the server has fully processed all messages in its mailbox, up to the initiation of the call. ### table_keys/1 * ### `table_keys(TableName) -> any()` ### table_keys/2 * ### `table_keys(TableName, Prefix) -> any()` ### table_keys/4 * ### `table_keys(TableName, CurrentKey, Prefix, Acc) -> any()` ### test_opts/1 * ### `test_opts(PersistentStore) -> any()` Generate a set of options for testing. The default is to use an `fs` store as the persistent backing. ### test_opts/2 * ### `test_opts(PersistentStore, Capacity) -> any()` ### type/2 ### `type(Opts, Key) -> any()` Determine the type of a key in the store. ### type_test/0 * ### `type_test() -> any()` ### unknown_value_test/0 * ### `unknown_value_test() -> any()` ### update_cache_size/3 * ### `update_cache_size(X1, PreviousSize, NewSize) -> any()` ### update_recently_used/3 * ### `update_recently_used(State, Key, Entry) -> any()` ### write/3 ### `write(Opts, RawKey, Value) -> any()` Write an entry in the cache. After writing, the LRU order is updated by moving the key to the most-recently-used position, re-prioritizing the cache entry. --- END OF FILE: docs/resources/source-code/hb_store_lru.md --- --- START OF FILE: docs/resources/source-code/hb_store_remote_node.md --- # [Module hb_store_remote_node.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_store_remote_node.erl) A store module that reads data from another AO node. ## Description ## Notably, this store only provides the _read_ side of the store interface. The write side could be added, returning a commitment that the data has been written to the remote node. In that case, the node would probably want to upload it to an Arweave bundler to ensure persistence, too. ## Function Index ##
make_link/3Link a source to a destination in the remote node.
read/2Read a key from the remote node.
read_test/0*Test that we can create a store, write a random message to it, then start a remote node with that store, and read the message from it.
resolve/2Resolve a key path in the remote store.
scope/1Return the scope of this store.
type/2Determine the type of value at a given key.
write/3Write a key to the remote node.
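
A typical use is fetching a committed message by ID from a peer node. The option key naming the peer (`<<"node">>` below), the placeholder ID, and the result shapes are assumptions for illustration only.

```erlang
%% Sketch only: the <<"node">> key and result shapes are illustrative assumptions.
StoreOpts = #{ <<"node">> => <<"http://localhost:8734">> },
MessageID = <<"SomeMessageIDOnThePeer">>,  % placeholder ID of a message on the peer
case hb_store_remote_node:read(StoreOpts, MessageID) of
    {ok, Msg} -> Msg;        % the committed message returned by the peer
    not_found -> not_found
end.
```
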
## Function Details ## ### make_link/3 ### `make_link(Opts, Source, Destination) -> any()` Link a source to a destination in the remote node. Constructs an HTTP POST link request. If a wallet is provided, the message is signed. Returns {ok, Path} on HTTP 200, or {error, Reason} on failure. ### read/2 ### `read(Opts, Key) -> any()` Read a key from the remote node. Makes an HTTP GET request to the remote node and returns the committed message. ### read_test/0 * ### `read_test() -> any()` Test that we can create a store, write a random message to it, then start a remote node with that store, and read the message from it. ### resolve/2 ### `resolve(X1, Key) -> any()` Resolve a key path in the remote store. For the remote node store, the key is returned as-is. ### scope/1 ### `scope(Arg) -> any()` Return the scope of this store. For the remote store, the scope is always `remote`. ### type/2 ### `type(Opts, Key) -> any()` Determine the type of value at a given key. Remote nodes support only the `simple` type or `not_found`. ### write/3 ### `write(Opts, Key, Value) -> any()` Write a key to the remote node. Constructs an HTTP POST write request. If a wallet is provided, the message is signed. Returns {ok, Path} on HTTP 200, or {error, Reason} on failure. --- END OF FILE: docs/resources/source-code/hb_store_remote_node.md --- --- START OF FILE: docs/resources/source-code/hb_store_rocksdb.md --- # [Module hb_store_rocksdb.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_store_rocksdb.erl) A process wrapper over rocksdb storage. __Behaviours:__ [`gen_server`](gen_server.md), [`hb_store`](hb_store.md). ## Description ## Replicates functionality of the hb_fs_store module. Encodes the item types with the help of prefixes, see `encode_value/2` and `decode_value/1` ## Data Types ## ### key() ###

key() = binary() | list()
### value() ###

value() = binary() | list()
### value_type() ###

value_type() = link | raw | group
## Function Index ##
add_path/3Add two path components together.
code_change/3
collect/1*
collect/2*
convert_if_list/1*
decode_value/1*
do_read/2*
do_resolve/3*
do_write/3*Write given Key and Value to the database.
enabled/0Returns whether the RocksDB store is enabled.
encode_value/2*
ensure_dir/2*
ensure_dir/3*
ensure_list/1*Ensure that the given filename is a list, not a binary.
handle_call/3
handle_cast/2
handle_info/2
init/1
join/1*
list/0List all items registered in rocksdb store.
list/2Returns the full list of items stored under the given path.
make_group/2Creates group under the given path.
make_link/3
maybe_append_key_to_group/2*
maybe_convert_to_binary/1*
maybe_create_dir/3*
open_rockdb/1*
path/2Return path.
read/2Read data by the key.
reset/1
resolve/2Replace links in a path with the target of the link.
scope/1Return scope (local).
start/1
start_link/1Start the RocksDB store.
stop/1
terminate/2
type/2Get type of the current item.
write/3Write given Key and Value to the database.
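
A sketch of the public interface, grounded in the specs below. The `<<"prefix">>` option naming the on-disk location is an assumption by analogy with the other store backends; check `enabled/0` first, since RocksDB support is not compiled into every distribution.

```erlang
%% Illustrative sketch; the <<"prefix">> option key is an assumption.
true = hb_store_rocksdb:enabled(),
StoreOpts = #{ <<"prefix">> => <<"/tmp/hb-rocksdb-example">> },
{ok, _Pid} = hb_store_rocksdb:start_link(StoreOpts),
ok = hb_store_rocksdb:write(StoreOpts, <<"animals/cat">>, <<"meow">>),
{ok, <<"meow">>} = hb_store_rocksdb:read(StoreOpts, <<"animals/cat">>),
%% Links are followed recursively on read.
ok = hb_store_rocksdb:make_link(StoreOpts, <<"animals/cat">>, <<"favourite">>),
{ok, <<"meow">>} = hb_store_rocksdb:read(StoreOpts, <<"favourite">>),
simple = hb_store_rocksdb:type(StoreOpts, <<"animals/cat">>).
```
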
## Function Details ## ### add_path/3 ### `add_path(Opts, Path1, Path2) -> any()` Add two path components together. // is not used ### code_change/3 ### `code_change(OldVsn, State, Extra) -> any()` ### collect/1 * ### `collect(Iterator) -> any()` ### collect/2 * ### `collect(Iterator, Acc) -> any()` ### convert_if_list/1 * ### `convert_if_list(Value) -> any()` ### decode_value/1 * ###

decode_value(X1::binary()) -> {value_type(), binary()}

### do_read/2 * ### `do_read(Opts, Key) -> any()` ### do_resolve/3 * ### `do_resolve(Opts, FinalPath, Rest) -> any()` ### do_write/3 * ###

do_write(Opts, Key, Value) -> Result
  • Opts = map()
  • Key = key()
  • Value = value()
  • Result = ok | {error, any()}
Write given Key and Value to the database ### enabled/0 ### `enabled() -> any()` Returns whether the RocksDB store is enabled. ### encode_value/2 * ###

encode_value(X1::value_type(), Value::binary()) -> binary()

### ensure_dir/2 * ### `ensure_dir(DBHandle, BaseDir) -> any()` ### ensure_dir/3 * ### `ensure_dir(DBHandle, CurrentPath, Rest) -> any()` ### ensure_list/1 * ### `ensure_list(Value) -> any()` Ensure that the given filename is a list, not a binary. ### handle_call/3 ### `handle_call(Request, From, State) -> any()` ### handle_cast/2 ### `handle_cast(Request, State) -> any()` ### handle_info/2 ### `handle_info(Info, State) -> any()` ### init/1 ### `init(Dir) -> any()` ### join/1 * ### `join(Key) -> any()` ### list/0 ### `list() -> any()` List all items registered in rocksdb store. Should be used only for testing/debugging, as the underlying operation is doing full traversal on the KV storage, and is slow. ### list/2 ###

list(Opts, Path) -> Result
  • Opts = any()
  • Path = any()
  • Result = {ok, [string()]} | {error, term()}
Returns the full list of items stored under the given path, where the child item paths are relative to the parent item's path (same as in `hb_store_fs`). ### make_group/2 ###

make_group(Opts, Key) -> Result
  • Opts = any()
  • Key = binary()
  • Result = ok | {error, already_added}
Creates group under the given path. ### make_link/3 ###

make_link(Opts::any(), Key1::key(), New::key()) -> ok

### maybe_append_key_to_group/2 * ### `maybe_append_key_to_group(Key, CurrentDirContents) -> any()` ### maybe_convert_to_binary/1 * ### `maybe_convert_to_binary(Value) -> any()` ### maybe_create_dir/3 * ### `maybe_create_dir(DBHandle, DirPath, Value) -> any()` ### open_rockdb/1 * ### `open_rockdb(RawDir) -> any()` ### path/2 ### `path(Opts, Path) -> any()` Return path ### read/2 ###

read(Opts, Key) -> Result
  • Opts = map()
  • Key = key() | list()
  • Result = {ok, value()} | not_found | {error, {corruption, string()}} | {error, any()}
Read data by the key. Recursively follows link messages ### reset/1 ###

reset(Opts::[]) -> ok | no_return()

### resolve/2 ###

resolve(Opts, Path) -> Result
  • Opts = any()
  • Path = binary() | list()
  • Result = not_found | string()
Replace links in a path with the target of the link. ### scope/1 ### `scope(X1) -> any()` Return scope (local) ### start/1 ### `start(Opts) -> any()` ### start_link/1 ### `start_link(Opts) -> any()` Start the RocksDB store. ### stop/1 ###

stop(Opts::any()) -> ok

### terminate/2 ### `terminate(Reason, State) -> any()` ### type/2 ###

type(Opts, Key) -> Result
  • Opts = map()
  • Key = binary()
  • Result = composite | simple | not_found
Get type of the current item ### write/3 ###

write(Opts, Key, Value) -> Result
  • Opts = map()
  • Key = key()
  • Value = value()
  • Result = ok | {error, any()}
Write given Key and Value to the database --- END OF FILE: docs/resources/source-code/hb_store_rocksdb.md --- --- START OF FILE: docs/resources/source-code/hb_store.md --- # [Module hb_store.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_store.erl) A simple abstraction layer for AO key value store operations. ## Description ## This interface allows us to swap out the underlying store implementation(s) as desired, without changing the API that `hb_cache` employs. Additionally, it enables node operators to customize their configuration to maximize performance, data availability, and other factors. Stores can be represented in a node's configuration as either a single message, or a (`structured@1.0`) list of store messages. If a list of stores is provided, the node will cycle through each until a viable store is found to execute the given function. A valid store must implement a _subset_ of the following functions:

- `start/1`: Initialize the store.
- `stop/1`: Stop any processes (etc.) that manage the store.
- `reset/1`: Restore the store to its original, empty state.
- `scope/0`: A tag describing the scope of a store's search: `in_memory`, `local`, `remote`, `arweave`, etc. Used to allow node operators to prioritize their stores for search.
- `make_group/2`: Create a new group of keys in the store with the given ID.
- `make_link/3`: Create a link (implying one key should redirect to another) from `existing` to `new` (in that order).
- `type/2`: Return whether the value found at the given key is a `composite` (group) type, or a `simple` direct binary.
- `read/2`: Read the data at the given location, returning a binary if it is a `simple` value, or a message if it is a complex term.
- `write/3`: Write the given `key` with the associated `value` (in that order) to the store.
- `list/2`: For `composite` type keys, return a list of its child keys.
- `path/2`: Optionally transform a list of path parts into the store's canonical form.

Each function takes a `store` message first, containing an arbitrary set of its necessary configuration keys, as well as the `store-module` key which refers to the Erlang module that implements the store. All functions must return `ok` or `{ok, Result}`, as appropriate. Other results will lead to the store manager (this module) iterating to the next store message given by the user. If none of the given store messages are able to execute a requested service, the store manager will return `not_found`. ## Function Index ##
add_path/2Add two path components together.
add_path/3
behavior_info/1
benchmark_key_read_write/1*Benchmark a store.
benchmark_key_read_write/3*
benchmark_message_read_write/1*
benchmark_message_read_write/3*
benchmark_suite_test_/0*
call_all/3*Call a function on all modules in the store.
call_function/3*Call a function on the first store module that succeeds.
do_call_function/3*
do_find/1*
ensure_instance_alive/2*Handle a found instance message.
filter/2Takes a store object and a filter function or match spec, returning a new store object with only the modules that match the filter.
find/1Find or spawn a store instance by its store opts.
generate_test_suite/1
generate_test_suite/2
get_store_scope/1*Ask a store for its own scope.
hierarchical_path_resolution_test/1*Ensure that we can resolve links through a directory.
join/1Join a list of path components together.
list/2List the keys in a group in the store.
make_group/2Make a group in the store.
make_link/3Make a link from one path to another in the store.
path/1Create a path from a list of path components.
path/2
read/2Read a key from the store.
reset/1Delete all of the keys in a store.
resolve/2Follow links through the store to resolve a path to its ultimate target.
resursive_path_resolution_test/1*Ensure that we can resolve links recursively.
rocks_stores/0*
scope/2Limit the store scope to only a specific (set of) option(s).
simple_path_resolution_test/1*Test path resolution dynamics.
sort/2Order a store by a preference of its scopes.
spawn_instance/1*Create a new instance of a store and return its term.
start/1Ensure that a store, or list of stores, have all been started.
stop/1
store_suite_test_/0*
test_stores/0Return a list of stores for testing.
type/2Get the type of element of a given path in the store.
write/3Write a key with a value to the store.
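
A sketch of driving multiple backends through this manager. Each store message carries a `store-module` key plus whatever configuration that backend expects; the specific option values and the choice of backends below are illustrative.

```erlang
%% Illustrative store list: a fast local LRU first, a filesystem store second.
Stores =
    [
        #{ <<"store-module">> => hb_store_lru, <<"capacity">> => 1000 },
        #{ <<"store-module">> => hb_store_fs, <<"prefix">> => <<"cache-example">> }
    ],
_ = hb_store:start(Stores),
ok = hb_store:write(Stores, <<"example/key">>, <<"value">>),
{ok, <<"value">>} = hb_store:read(Stores, <<"example/key">>),
%% Restrict lookups to local backends, then order them by scope preference.
Local = hb_store:filter(Stores, fun(Scope, _Opts) -> Scope =:= local end),
Sorted = hb_store:sort(Local, [in_memory, local, remote, arweave]),
{ok, <<"value">>} = hb_store:read(Sorted, <<"example/key">>).
```
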
## Function Details ## ### add_path/2 ### `add_path(Path1, Path2) -> any()` Add two path components together. If no store implements the add_path function, we concatenate the paths. ### add_path/3 ### `add_path(Store, Path1, Path2) -> any()` ### behavior_info/1 ### `behavior_info(X1) -> any()` ### benchmark_key_read_write/1 * ### `benchmark_key_read_write(Store) -> any()` Benchmark a store. By default, we write 10,000 keys and read 10,000 keys. This can be altered by setting the `STORE_BENCH_WRITE_OPS` and `STORE_BENCH_READ_OPS` macros. ### benchmark_key_read_write/3 * ### `benchmark_key_read_write(Store, WriteOps, ReadOps) -> any()` ### benchmark_message_read_write/1 * ### `benchmark_message_read_write(Store) -> any()` ### benchmark_message_read_write/3 * ### `benchmark_message_read_write(Store, WriteOps, ReadOps) -> any()` ### benchmark_suite_test_/0 * ### `benchmark_suite_test_() -> any()` ### call_all/3 * ### `call_all(X, Function, Args) -> any()` Call a function on all modules in the store. ### call_function/3 * ### `call_function(X, Function, Args) -> any()` Call a function on the first store module that succeeds. Returns its result, or `not_found` if none of the stores succeed. If `TIME_CALLS` is set, this function will also time the call and increment the appropriate event counter. ### do_call_function/3 * ### `do_call_function(X, Function, Args) -> any()` ### do_find/1 * ### `do_find(StoreOpts) -> any()` ### ensure_instance_alive/2 * ### `ensure_instance_alive(StoreOpts, InstanceMessage) -> any()` Handle a found instance message. If it contains a PID, we check if it is alive. If it does not, we return it as is. ### filter/2 ### `filter(Module, Filter) -> any()` Takes a store object and a filter function or match spec, returning a new store object with only the modules that match the filter. The filter function takes 2 arguments: the scope and the options. It calls the store's scope function to get the scope of the module. ### find/1 ### `find(StoreOpts) -> any()` Find or spawn a store instance by its store opts. ### generate_test_suite/1 ### `generate_test_suite(Suite) -> any()` ### generate_test_suite/2 ### `generate_test_suite(Suite, Stores) -> any()` ### get_store_scope/1 * ### `get_store_scope(Store) -> any()` Ask a store for its own scope. If it doesn't have one, return the default scope (local). ### hierarchical_path_resolution_test/1 * ### `hierarchical_path_resolution_test(Store) -> any()` Ensure that we can resolve links through a directory. ### join/1 ### `join(Path) -> any()` Join a list of path components together. ### list/2 ### `list(Modules, Path) -> any()` List the keys in a group in the store. Use only in debugging. The hyperbeam model assumes that stores are built as efficient hash-based structures, so this is likely to be very slow for most stores. ### make_group/2 ### `make_group(Modules, Path) -> any()` Make a group in the store. A group can be seen as a namespace or 'directory' in a filesystem. ### make_link/3 ### `make_link(Modules, Existing, New) -> any()` Make a link from one path to another in the store. ### path/1 ### `path(Path) -> any()` Create a path from a list of path components. If no store implements the path function, we return the path with the 'default' transformation (id). ### path/2 ### `path(X1, Path) -> any()` ### read/2 ### `read(Modules, Key) -> any()` Read a key from the store. ### reset/1 ### `reset(Modules) -> any()` Delete all of the keys in a store. Should be used with extreme caution. 
Lost data can lose money in many/most of HyperBEAM's use cases. ### resolve/2 ### `resolve(Modules, Path) -> any()` Follow links through the store to resolve a path to its ultimate target. ### resursive_path_resolution_test/1 * ### `resursive_path_resolution_test(Store) -> any()` Ensure that we can resolve links recursively. ### rocks_stores/0 * ### `rocks_stores() -> any()` ### scope/2 ### `scope(Opts, Scope) -> any()` Limit the store scope to only a specific (set of) option(s). Takes either an Opts message or store, and either a single scope or a list of scopes. ### simple_path_resolution_test/1 * ### `simple_path_resolution_test(Store) -> any()` Test path resolution dynamics. ### sort/2 ### `sort(Stores, PreferenceOrder) -> any()` Order a store by a preference of its scopes. This is useful for making sure that faster (or perhaps cheaper) stores are used first. If a list is provided, it will be used as a preference order. If a map is provided, scopes will be ordered by the scores in the map. Any unknown scopes will default to a score of 0. ### spawn_instance/1 * ### `spawn_instance(StoreOpts) -> any()` Create a new instance of a store and return its term. ### start/1 ### `start(StoreOpts) -> any()` Ensure that a store, or list of stores, have all been started. ### stop/1 ### `stop(Modules) -> any()` ### store_suite_test_/0 * ### `store_suite_test_() -> any()` ### test_stores/0 ### `test_stores() -> any()` Return a list of stores for testing. Additional individual functions are used to generate store options for those whose drivers are not built by default into all HyperBEAM distributions. ### type/2 ### `type(Modules, Path) -> any()` Get the type of element of a given path in the store. This can be a performance killer if the store is remote etc. Use only when necessary. ### write/3 ### `write(Modules, Key, Value) -> any()` Write a key with a value to the store. --- END OF FILE: docs/resources/source-code/hb_store.md --- --- START OF FILE: docs/resources/source-code/hb_structured_fields.md --- # [Module hb_structured_fields.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_structured_fields.erl) A module for parsing and converting between Erlang and HTTP Structured Fields, as described in RFC-9651. ## Description ## The mapping between Erlang and structured header types is as follows:

- List: `list()`
- Inner list: `{list, [item()], params()}`
- Dictionary: `[{binary(), item()}]` (there is no distinction between an empty list and an empty dictionary)
- Item with parameters: `{item, bare_item(), params()}`
- Parameters: `[{binary(), bare_item()}]`
- Bare item: a `bare_item()`, which can be one of the following types:
    - Integer: `integer()`
    - Decimal: `{decimal, {integer(), integer()}}`
    - String: `{string, binary()}`
    - Token: `{token, binary()}`
    - Byte sequence: `{binary, binary()}`
    - Boolean: `boolean()`

## Data Types ## ### sh_bare_item() ###

sh_bare_item() = integer() | sh_decimal() | boolean() | {string | token | binary, binary()}
### sh_decimal() ###

sh_decimal() = {decimal, {integer(), integer()}}
### sh_dictionary() ###

sh_dictionary() = [{binary(), sh_item() | sh_inner_list()}]
### sh_inner_list() ###

sh_inner_list() = {list, [sh_item()], sh_params()}
### sh_item() ###

sh_item() = {item, sh_bare_item(), sh_params()}
### sh_list() ###

sh_list() = [sh_item() | sh_inner_list()]
### sh_params() ###

sh_params() = [{binary(), sh_bare_item()}]
## Function Index ##
bare_item/1
dictionary/1
e2t/1*
e2tb/1*
e2tp/1*
escape_string/2*
exp_div/1*
expected_to_term/1*
from_bare_item/1Convert an SF bare_item to an Erlang term.
inner_list/1*
item/1
item_or_inner_list/1*
key_to_binary/1*Convert an Erlang term to a binary key.
list/1
params/1*
parse_bare_item/1Parse an integer or decimal.
parse_before_param/2*
parse_binary/1Parse a byte sequence binary.
parse_binary/2*
parse_decimal/5*Parse a decimal binary.
parse_dict_before_member/2*Parse a binary SF dictionary before a member.
parse_dict_before_sep/2*Parse a binary SF dictionary before a separator.
parse_dict_key/3*
parse_dictionary/1Parse a binary SF dictionary.
parse_inner_list/2*
parse_item/1Parse a binary SF item to an SF item.
parse_item1/1*
parse_list/1Parse a binary SF list.
parse_list_before_member/2*Parse a binary SF list before a member.
parse_list_before_sep/2*Parse a binary SF list before a separator.
parse_list_member/2*Parse a binary SF list before a member.
parse_number/3*Parse an integer or decimal binary.
parse_param/3*
parse_string/2*Parse a string binary.
parse_struct_hd_test_/0*
parse_token/2*Parse a token binary.
raw_to_binary/1*
to_bare_item/1*Convert an Erlang term to an SF bare_item.
to_dictionary/1Convert a map to a dictionary.
to_dictionary/2*
to_dictionary_depth_test/0*
to_dictionary_test/0*
to_inner_item/1*Convert an Erlang term to an SF item.
to_inner_list/1*Convert an inner list to an SF term.
to_inner_list/2*
to_inner_list/3*
to_item/1Convert an item to a dictionary.
to_item/2
to_item_or_inner_list/1*Convert an Erlang term to an SF item or inner_list.
to_item_test/0*
to_list/1Convert a list to an SF term.
to_list/2*
to_list_depth_test/0*
to_list_test/0*
to_param/1*Convert an Erlang term to an SF parameter.
trim_ws/1*
trim_ws_end/2*
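
An illustrative round trip between header text and the Erlang terms described in the type mapping above. The expected shapes shown in the comments follow that mapping; they are given for orientation rather than as authoritative output.

```erlang
%% Parse a structured-field dictionary into its Erlang representation.
Dict = hb_structured_fields:parse_dictionary(<<"a=1, b=?0, c=\"hi\"">>),
%% Expected shape (per the mapping above):
%%   [ {<<"a">>, {item, 1, []}},
%%     {<<"b">>, {item, false, []}},
%%     {<<"c">>, {item, {string, <<"hi">>}, []}} ]
%% Parse a single item carrying a parameter.
Item = hb_structured_fields:parse_item(<<"42;units=\"px\"">>),
%% Expected shape: {item, 42, [{<<"units">>, {string, <<"px">>}}]}
%% Serialize an item back into a binary suitable for a header value.
Header = iolist_to_binary(
    hb_structured_fields:item({item, {token, <<"text/plain">>}, []})
).
```
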
## Function Details ## ### bare_item/1 ### `bare_item(Integer) -> any()` ### dictionary/1 ###

dictionary(Map::#{binary() => sh_item() | sh_inner_list()} | sh_dictionary()) -> iolist()

### e2t/1 * ### `e2t(Dict) -> any()` ### e2tb/1 * ### `e2tb(V) -> any()` ### e2tp/1 * ### `e2tp(Params) -> any()` ### escape_string/2 * ### `escape_string(X1, Acc) -> any()` ### exp_div/1 * ### `exp_div(N) -> any()` ### expected_to_term/1 * ### `expected_to_term(Dict) -> any()` ### from_bare_item/1 ### `from_bare_item(BareItem) -> any()` Convert an SF `bare_item` to an Erlang term. ### inner_list/1 * ### `inner_list(X1) -> any()` ### item/1 ###

item(X1::sh_item()) -> iolist()

### item_or_inner_list/1 * ### `item_or_inner_list(Value) -> any()` ### key_to_binary/1 * ### `key_to_binary(Key) -> any()` Convert an Erlang term to a binary key. ### list/1 ###

list(List::sh_list()) -> iolist()

### params/1 * ### `params(Params) -> any()` ### parse_bare_item/1 ### `parse_bare_item(X1) -> any()` Parse an integer or decimal. ### parse_before_param/2 * ### `parse_before_param(X1, Acc) -> any()` ### parse_binary/1 ### `parse_binary(Bin) -> any()` Parse a byte sequence binary. ### parse_binary/2 * ### `parse_binary(X1, Acc) -> any()` ### parse_decimal/5 * ### `parse_decimal(R, L1, L2, IntAcc, FracAcc) -> any()` Parse a decimal binary. ### parse_dict_before_member/2 * ### `parse_dict_before_member(X1, Acc) -> any()` Parse a binary SF dictionary before a member. ### parse_dict_before_sep/2 * ### `parse_dict_before_sep(X1, Acc) -> any()` Parse a binary SF dictionary before a separator. ### parse_dict_key/3 * ### `parse_dict_key(R, Acc, K) -> any()` ### parse_dictionary/1 ###

parse_dictionary(X1::binary()) -> sh_dictionary()

Parse a binary SF dictionary. ### parse_inner_list/2 * ### `parse_inner_list(R0, Acc) -> any()` ### parse_item/1 ###

parse_item(Bin::binary()) -> sh_item()

Parse a binary SF item to an SF `item`. ### parse_item1/1 * ### `parse_item1(Bin) -> any()` ### parse_list/1 ###

parse_list(Bin::binary()) -> sh_list()

Parse a binary SF list. ### parse_list_before_member/2 * ### `parse_list_before_member(R, Acc) -> any()` Parse a binary SF list before a member. ### parse_list_before_sep/2 * ### `parse_list_before_sep(X1, Acc) -> any()` Parse a binary SF list before a separator. ### parse_list_member/2 * ### `parse_list_member(R0, Acc) -> any()` Parse a binary SF list before a member. ### parse_number/3 * ### `parse_number(R, L, Acc) -> any()` Parse an integer or decimal binary. ### parse_param/3 * ### `parse_param(R, Acc, K) -> any()` ### parse_string/2 * ### `parse_string(X1, Acc) -> any()` Parse a string binary. ### parse_struct_hd_test_/0 * ### `parse_struct_hd_test_() -> any()` ### parse_token/2 * ### `parse_token(R, Acc) -> any()` Parse a token binary. ### raw_to_binary/1 * ### `raw_to_binary(RawList) -> any()` ### to_bare_item/1 * ### `to_bare_item(BareItem) -> any()` Convert an Erlang term to an SF `bare_item`. ### to_dictionary/1 ### `to_dictionary(Map) -> any()` Convert a map to a dictionary. ### to_dictionary/2 * ### `to_dictionary(Dict, Rest) -> any()` ### to_dictionary_depth_test/0 * ### `to_dictionary_depth_test() -> any()` ### to_dictionary_test/0 * ### `to_dictionary_test() -> any()` ### to_inner_item/1 * ### `to_inner_item(Item) -> any()` Convert an Erlang term to an SF `item`. ### to_inner_list/1 * ### `to_inner_list(Inner) -> any()` Convert an inner list to an SF term. ### to_inner_list/2 * ### `to_inner_list(Inner, Params) -> any()` ### to_inner_list/3 * ### `to_inner_list(Inner, Rest, Params) -> any()` ### to_item/1 ### `to_item(Item) -> any()` Convert an item to a dictionary. ### to_item/2 ### `to_item(Item, Params) -> any()` ### to_item_or_inner_list/1 * ### `to_item_or_inner_list(ItemOrInner) -> any()` Convert an Erlang term to an SF `item` or `inner_list`. ### to_item_test/0 * ### `to_item_test() -> any()` ### to_list/1 ### `to_list(List) -> any()` Convert a list to an SF term. ### to_list/2 * ### `to_list(Acc, Rest) -> any()` ### to_list_depth_test/0 * ### `to_list_depth_test() -> any()` ### to_list_test/0 * ### `to_list_test() -> any()` ### to_param/1 * ### `to_param(X1) -> any()` Convert an Erlang term to an SF `parameter`. ### trim_ws/1 * ### `trim_ws(R) -> any()` ### trim_ws_end/2 * ### `trim_ws_end(Value, N) -> any()` --- END OF FILE: docs/resources/source-code/hb_structured_fields.md --- --- START OF FILE: docs/resources/source-code/hb_sup.md --- # [Module hb_sup.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_sup.erl) __Behaviours:__ [`supervisor`](supervisor.md). ## Function Index ##
init/1
start_link/0
start_link/1
store_children/1*Generate a child spec for stores in the given Opts.
## Function Details ## ### init/1 ### `init(Opts) -> any()` ### start_link/0 ### `start_link() -> any()` ### start_link/1 ### `start_link(Opts) -> any()` ### store_children/1 * ### `store_children(Store) -> any()` Generate a child spec for stores in the given Opts. --- END OF FILE: docs/resources/source-code/hb_sup.md --- --- START OF FILE: docs/resources/source-code/hb_test_utils.md --- # [Module hb_test_utils.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_test_utils.erl) Simple utilities for testing HyperBEAM. ## Function Index ##
run/4
satisfies_requirements/1*Determine if the environment satisfies the given test requirements.
suite_with_opts/2Run each test in a suite with each set of options.
test_store/0Generate a new, unique test store as an isolated context for an execution.
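
A sketch of wiring an EUnit suite through `suite_with_opts/2`. The suite format follows the description above ({name, description, test function} tuples, and option maps with `name` and `opts` keys); the arity of the test function and the way the store is retrieved from the options are assumptions here.

```erlang
%% Illustrative only: the test function's argument handling is an assumption.
suite_test_() ->
    hb_test_utils:suite_with_opts(
        [
            {store_round_trip, "Write then read back a key", fun(Opts) ->
                Store = maps:get(store, Opts),
                ok = hb_store:write(Store, <<"k">>, <<"v">>),
                {ok, <<"v">>} = hb_store:read(Store, <<"k">>)
            end}
        ],
        [
            #{
                name => isolated_store,
                desc => "Run against a fresh, isolated test store",
                opts => #{ store => hb_test_utils:test_store() }
            }
        ]
    ).
```
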
## Function Details ## ### run/4 ### `run(Name, OptsName, Suite, OptsList) -> any()` ### satisfies_requirements/1 * ### `satisfies_requirements(Requirements) -> any()` Determine if the environment satisfies the given test requirements. Requirements is a list of atoms, each corresponding to a module that must return true if it exposes an `enabled/0` function. ### suite_with_opts/2 ### `suite_with_opts(Suite, OptsList) -> any()` Run each test in a suite with each set of options. Start and reset the store(s) for each test. Expects suites to be a list of tuples with the test name, description, and test function. The list of `Opts` should contain maps with the `name` and `opts` keys. Each element may also contain a `skip` key with a list of test names to skip. They can also contain a `desc` key with a description of the options. ### test_store/0 ### `test_store() -> any()` Generate a new, unique test store as an isolated context for an execution. --- END OF FILE: docs/resources/source-code/hb_test_utils.md --- --- START OF FILE: docs/resources/source-code/hb_tracer.md --- # [Module hb_tracer.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_tracer.erl) A module for tracing the flow of requests through the system. ## Description ## This allows for tracking the lifecycle of a request from HTTP receipt through processing and response. ## Function Index ##

| Function | Description |
| --- | --- |
| `checkmark_emoji/0`* | |
| `failure_emoji/0`* | |
| `format_error_trace/1` | Format an error trace in a user-friendly, emoji-oriented output. |
| `get_trace/1` | Exports the complete queue of events. |
| `record_step/2` | Register a new step in a tracer. |
| `stage_to_emoji/1`* | |
| `start_trace/0` | Start a new tracer that acts as a queue of registered events. |
| `trace_loop/1`* | |

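A hedged sketch of the flow these functions suggest: start a tracer, record a few steps as a request moves through the node, then export the queue. The step terms used here are illustrative assumptions; see the per-function details below for exact semantics.

```erlang
%% Illustrative only: the exact shape of step terms is not specified here,
%% so the tuples below are assumptions.
trace_example() ->
    TracePID = hb_tracer:start_trace(),
    hb_tracer:record_step(TracePID, {http, request_received}),
    hb_tracer:record_step(TracePID, {ao_core, resolved}),
    hb_tracer:get_trace(TracePID).
```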
## Function Details ## ### checkmark_emoji/0 * ### `checkmark_emoji() -> any()` ### failure_emoji/0 * ### `failure_emoji() -> any()` ### format_error_trace/1 ### `format_error_trace(Trace) -> any()` Format a trace for error in a user-friendly emoji oriented output ### get_trace/1 ### `get_trace(TracePID) -> any()` Exports the complete queue of events ### record_step/2 ### `record_step(TracePID, Step) -> any()` Register a new step into a tracer ### stage_to_emoji/1 * ### `stage_to_emoji(Stage) -> any()` ### start_trace/0 ### `start_trace() -> any()` Start a new tracer acting as queue of events registered. ### trace_loop/1 * ### `trace_loop(Trace) -> any()` --- END OF FILE: docs/resources/source-code/hb_tracer.md --- --- START OF FILE: docs/resources/source-code/hb_util.md --- # [Module hb_util.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_util.erl) A collection of utility functions for building with HyperBEAM. ## Function Index ##

| Function | Description |
| --- | --- |
| `add_commas/1`* | |
| `addresses_to_binary/1`* | Serialize the given list of addresses to a binary, using the structured fields format. |
| `all_hb_modules/0` | Get all loaded modules that are part of HyperBEAM. |
| `atom/1` | Coerce a string to an atom. |
| `bin/1` | Coerce a value to a binary. |
| `binary_to_addresses/1` | Parse a list from a binary. |
| `count/2` | |
| `debug_fmt/1` | Convert a term to a string for debugging print purposes. |
| `debug_fmt/2` | |
| `debug_print/4` | Print a message to the standard error stream, prefixed by the amount of time that has elapsed since the last call to this function. |
| `decode/1` | Try to decode a URL safe base64 into a binary or throw an error when invalid. |
| `deep_merge/3` | Deep merge two maps, recursively merging nested maps. |
| `do_debug_fmt/2`* | |
| `do_to_lines/1`* | |
| `encode/1` | Encode a binary to a URL safe base64 binary string. |
| `eunit_print/2` | Format and print an indented string to standard error. |
| `find_value/2` | Find the value associated with a key in a parsed JSON structure list. |
| `find_value/3` | |
| `find_value/4`* | |
| `float/1` | Coerce a string to a float. |
| `format_address/2`* | If the user attempts to print a wallet, format it as an address. |
| `format_binary/1` | Format a binary as a short string suitable for printing. |
| `format_debug_trace/3`* | Generate the appropriate level of trace for a given call. |
| `format_indented/2` | Format a string with an indentation level. |
| `format_indented/3` | |
| `format_maybe_multiline/2` | Format a map as either a single line or a multi-line string depending on the value of the `debug_print_map_line_threshold` runtime option. |
| `format_trace/1` | Format a stack trace as a list of strings, one for each stack frame. |
| `format_trace/2`* | |
| `format_trace_short/1` | Format a trace to a short string. |
| `format_trace_short/4`* | |
| `format_tuple/2`* | Helper function to format tuples with arity greater than 2. |
| `get_trace/0`* | Get the trace of the current process. |
| `hd/1` | Get the first element (the lowest integer key >= 1) of a numbered map. |
| `hd/2` | |
| `hd/3` | |
| `hd/5`* | |
| `human_id/1` | Convert a native binary ID to a human readable ID. |
| `human_int/1` | Add `,` characters to a number every 3 digits to make it human readable. |
| `id/1` | Return the human-readable form of an ID of a message when given either a message explicitly, raw encoded ID, or an Erlang Arweave `tx` record. |
| `id/2` | |
| `int/1` | Coerce a string to an integer. |
| `is_hb_module/1` | Is the given module part of HyperBEAM? |
| `is_hb_module/2` | |
| `is_human_binary/1`* | Determine whether a binary is human-readable. |
| `is_ordered_list/2` | Determine if the message given is an ordered list, starting from 1. |
| `is_ordered_list/3`* | |
| `is_string_list/1` | Is the given term a string list? |
| `key_to_atom/2` | Convert keys in a map to atoms, lowering `-` to `_`. |
| `list/1` | Coerce a value to a list. |
| `list_intersection/2`* | Returns the intersection of two lists, with stable ordering. |
| `list_replace/3` | Replace a key in a list with a new value. |
| `list_to_numbered_message/1` | Convert a list of elements to a map with numbered keys. |
| `maybe_throw/2` | Throw an exception if the Opts map has an `error_strategy` key with the value `throw`. |
| `mean/1` | |
| `message_to_ordered_list/1` | Take a message with numbered keys and convert it to a list of tuples with the associated key as an integer. |
| `message_to_ordered_list/2` | |
| `message_to_ordered_list/4`* | |
| `native_id/1` | Convert a human readable ID to a native binary ID. |
| `normalize_trace/1`* | Remove all calls from this module from the top of a trace. |
| `number/1` | Label a list of elements with a number. |
| `ok/1` | Unwrap a tuple of the form `{ok, Value}`, or throw/return, depending on the value of the `error_strategy` option. |
| `ok/2` | |
| `pick_weighted/2`* | Pick a random element from a list, weighted by the values in the list. |
| `print_trace/3`* | |
| `print_trace/4` | Print the trace of the current stack, up to the first non-hyperbeam module. |
| `print_trace_short/4` | Print a trace to the standard error stream. |
| `remove_common/2` | Remove the common prefix from two strings, returning the remainder of the first string. |
| `remove_trailing_noise/1`* | |
| `remove_trailing_noise/2` | |
| `safe_decode/1` | Safely decode a URL safe base64 into a binary returning an ok or error tuple. |
| `safe_encode/1` | Safely encode a binary to URL safe base64. |
| `server_id/0`* | Retrieve the server ID of the calling process, if known. |
| `server_id/1`* | |
| `short_id/1` | Return a short ID for the different types of IDs used in AO-Core. |
| `shuffle/1`* | Shuffle a list. |
| `stddev/1` | |
| `to_hex/1` | Convert a binary to a hex string. |
| `to_lines/1`* | |
| `to_lower/1` | Convert a binary to lowercase. |
| `to_sorted_keys/1` | Given a map or KVList, return a deterministically ordered list of its keys. |
| `to_sorted_keys/2` | |
| `to_sorted_list/1` | Given a map or KVList, return a deterministically sorted list of its key-value pairs. |
| `to_sorted_list/2` | |
| `trace_macro_helper/5` | Utility function to help macro `?trace/0` remove the first frame of the stack trace. |
| `unique/1` | Take a list and return a list of unique elements. |
| `until/1` | Utility function to wait for a condition to be true. |
| `until/2` | |
| `until/3` | |
| `variance/1` | |
| `weighted_random/1` | Return a random element from a list, weighted by the values in the list. |

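A hedged sketch of a few of the coercion and encoding helpers listed above; the input values are illustrative, and the exact accepted types (string vs. binary) and return values are documented per function below.

```erlang
%% Illustrative usage only; see the function details below for exact
%% accepted types and return values.
util_example() ->
    Bin     = hb_util:bin(1234),            %% coerce a value to a binary
    Int     = hb_util:int(<<"42">>),        %% coerce a string to an integer
    Encoded = hb_util:encode(<<1,2,3,4>>),  %% URL-safe base64 encode a binary
    Decoded = hb_util:decode(Encoded),      %% ...and decode it again
    {Bin, Int, Encoded, Decoded}.
```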
## Function Details ## ### add_commas/1 * ### `add_commas(Rest) -> any()` ### addresses_to_binary/1 * ### `addresses_to_binary(List) -> any()` Serialize the given list of addresses to a binary, using the structured fields format. ### all_hb_modules/0 ### `all_hb_modules() -> any()` Get all loaded modules that are loaded and are part of HyperBEAM. ### atom/1 ### `atom(Str) -> any()` Coerce a string to an atom. ### bin/1 ### `bin(Value) -> any()` Coerce a value to a binary. ### binary_to_addresses/1 ### `binary_to_addresses(List) -> any()` Parse a list from a binary. First attempts to parse the binary as a structured-fields list, and if that fails, it attempts to parse the list as a comma-separated value, stripping quotes and whitespace. ### count/2 ### `count(Item, List) -> any()` ### debug_fmt/1 ### `debug_fmt(X) -> any()` Convert a term to a string for debugging print purposes. ### debug_fmt/2 ### `debug_fmt(X, Indent) -> any()` ### debug_print/4 ### `debug_print(X, Mod, Func, LineNum) -> any()` Print a message to the standard error stream, prefixed by the amount of time that has elapsed since the last call to this function. ### decode/1 ### `decode(Input) -> any()` Try to decode a URL safe base64 into a binary or throw an error when invalid. ### deep_merge/3 ### `deep_merge(Map1, Map2, Opts) -> any()` Deep merge two maps, recursively merging nested maps. ### do_debug_fmt/2 * ### `do_debug_fmt(Wallet, Indent) -> any()` ### do_to_lines/1 * ### `do_to_lines(In) -> any()` ### encode/1 ### `encode(Bin) -> any()` Encode a binary to URL safe base64 binary string. ### eunit_print/2 ### `eunit_print(FmtStr, FmtArgs) -> any()` Format and print an indented string to standard error. ### find_value/2 ### `find_value(Key, List) -> any()` Find the value associated with a key in parsed a JSON structure list. ### find_value/3 ### `find_value(Key, Map, Default) -> any()` ### find_value/4 * ### `find_value(Key, Map, Default, Opts) -> any()` ### float/1 ### `float(Str) -> any()` Coerce a string to a float. ### format_address/2 * ### `format_address(Wallet, Indent) -> any()` If the user attempts to print a wallet, format it as an address. ### format_binary/1 ### `format_binary(Bin) -> any()` Format a binary as a short string suitable for printing. ### format_debug_trace/3 * ### `format_debug_trace(Mod, Func, Line) -> any()` Generate the appropriate level of trace for a given call. ### format_indented/2 ### `format_indented(Str, Indent) -> any()` Format a string with an indentation level. ### format_indented/3 ### `format_indented(RawStr, Fmt, Ind) -> any()` ### format_maybe_multiline/2 ### `format_maybe_multiline(X, Indent) -> any()` Format a map as either a single line or a multi-line string depending on the value of the `debug_print_map_line_threshold` runtime option. ### format_trace/1 ### `format_trace(Stack) -> any()` Format a stack trace as a list of strings, one for each stack frame. Each stack frame is formatted if it matches the `stack_print_prefixes` option. At the first frame that does not match a prefix in the `stack_print_prefixes` option, the rest of the stack is not formatted. ### format_trace/2 * ### `format_trace(Rest, Prefixes) -> any()` ### format_trace_short/1 ### `format_trace_short(Trace) -> any()` Format a trace to a short string. ### format_trace_short/4 * ### `format_trace_short(Max, Latch, Trace, Prefixes) -> any()` ### format_tuple/2 * ### `format_tuple(Tuple, Indent) -> any()` Helper function to format tuples with arity greater than 2. 
### get_trace/0 * ### `get_trace() -> any()` Get the trace of the current process. ### hd/1 ### `hd(Message) -> any()` Get the first element (the lowest integer key >= 1) of a numbered map. Optionally, it takes a specifier of whether to return the key or the value, as well as a standard map of HyperBEAM runtime options. ### hd/2 ### `hd(Message, ReturnType) -> any()` ### hd/3 ### `hd(Message, ReturnType, Opts) -> any()` ### hd/5 * ### `hd(Map, Rest, Index, ReturnType, Opts) -> any()` ### human_id/1 ### `human_id(Bin) -> any()` Convert a native binary ID to a human readable ID. If the ID is already a human readable ID, it is returned as is. If it is an ethereum address, it is returned as is. ### human_int/1 ### `human_int(Float) -> any()` Add `,` characters to a number every 3 digits to make it human readable. ### id/1 ### `id(Item) -> any()` Return the human-readable form of an ID of a message when given either a message explicitly, raw encoded ID, or an Erlang Arweave `tx` record. ### id/2 ### `id(TX, Type) -> any()` ### int/1 ### `int(Str) -> any()` Coerce a string to an integer. ### is_hb_module/1 ### `is_hb_module(Atom) -> any()` Is the given module part of HyperBEAM? ### is_hb_module/2 ### `is_hb_module(Atom, Prefixes) -> any()` ### is_human_binary/1 * ### `is_human_binary(Bin) -> any()` Determine whether a binary is human-readable. ### is_ordered_list/2 ### `is_ordered_list(Msg, Opts) -> any()` Determine if the message given is an ordered list, starting from 1. ### is_ordered_list/3 * ### `is_ordered_list(N, Msg, Opts) -> any()` ### is_string_list/1 ### `is_string_list(MaybeString) -> any()` Is the given term a string list? ### key_to_atom/2 ### `key_to_atom(Key, Mode) -> any()` Convert keys in a map to atoms, lowering `-` to `_`. ### list/1 ### `list(Value) -> any()` Coerce a value to a list. ### list_intersection/2 * ### `list_intersection(List1, List2) -> any()` Returns the intersection of two lists, with stable ordering. ### list_replace/3 ### `list_replace(List, Key, Value) -> any()` Replace a key in a list with a new value. ### list_to_numbered_message/1 ### `list_to_numbered_message(Msg) -> any()` Convert a list of elements to a map with numbered keys. ### maybe_throw/2 ### `maybe_throw(Val, Opts) -> any()` Throw an exception if the Opts map has an `error_strategy` key with the value `throw`. Otherwise, return the value. ### mean/1 ### `mean(List) -> any()` ### message_to_ordered_list/1 ### `message_to_ordered_list(Message) -> any()` Take a message with numbered keys and convert it to a list of tuples with the associated key as an integer. Optionally, it takes a standard message of HyperBEAM runtime options. ### message_to_ordered_list/2 ### `message_to_ordered_list(Message, Opts) -> any()` ### message_to_ordered_list/4 * ### `message_to_ordered_list(Message, Keys, Key, Opts) -> any()` ### native_id/1 ### `native_id(Bin) -> any()` Convert a human readable ID to a native binary ID. If the ID is already a native binary ID, it is returned as is. ### normalize_trace/1 * ### `normalize_trace(Rest) -> any()` Remove all calls from this module from the top of a trace. ### number/1 ### `number(List) -> any()` Label a list of elements with a number. ### ok/1 ### `ok(Value) -> any()` Unwrap a tuple of the form `{ok, Value}`, or throw/return, depending on the value of the `error_strategy` option. ### ok/2 ### `ok(Other, Opts) -> any()` ### pick_weighted/2 * ### `pick_weighted(Rest, Remaining) -> any()` Pick a random element from a list, weighted by the values in the list. 
### print_trace/3 * ### `print_trace(Stack, Label, CallerInfo) -> any()` ### print_trace/4 ### `print_trace(Stack, CallMod, CallFunc, CallLine) -> any()` Print the trace of the current stack, up to the first non-hyperbeam module. Prints each stack frame on a new line, until it finds a frame that does not start with a prefix in the `stack_print_prefixes` hb_opts. Optionally, you may call this function with a custom label and caller info, which will be used instead of the default. ### print_trace_short/4 ### `print_trace_short(Trace, Mod, Func, Line) -> any()` Print a trace to the standard error stream. ### remove_common/2 ### `remove_common(MainStr, SubStr) -> any()` Remove the common prefix from two strings, returning the remainder of the first string. This function also coerces lists to binaries where appropriate, returning the type of the first argument. ### remove_trailing_noise/1 * ### `remove_trailing_noise(Str) -> any()` ### remove_trailing_noise/2 ### `remove_trailing_noise(Str, Noise) -> any()` ### safe_decode/1 ### `safe_decode(E) -> any()` Safely decode a URL safe base64 into a binary returning an ok or error tuple. ### safe_encode/1 ### `safe_encode(Bin) -> any()` Safely encode a binary to URL safe base64. ### server_id/0 * ### `server_id() -> any()` Retreive the server ID of the calling process, if known. ### server_id/1 * ### `server_id(Opts) -> any()` ### short_id/1 ### `short_id(Bin) -> any()` Return a short ID for the different types of IDs used in AO-Core. ### shuffle/1 * ### `shuffle(List) -> any()` Shuffle a list. ### stddev/1 ### `stddev(List) -> any()` ### to_hex/1 ### `to_hex(Bin) -> any()` Convert a binary to a hex string. Do not use this for anything other than generating a lower-case, non-special character id. It should not become part of the core protocol. We use b64u for efficient encoding. ### to_lines/1 * ### `to_lines(Elems) -> any()` ### to_lower/1 ### `to_lower(Str) -> any()` Convert a binary to a lowercase. ### to_sorted_keys/1 ### `to_sorted_keys(Msg) -> any()` Given a map or KVList, return a deterministically ordered list of its keys. ### to_sorted_keys/2 ### `to_sorted_keys(Msg, Opts) -> any()` ### to_sorted_list/1 ### `to_sorted_list(Msg) -> any()` Given a map or KVList, return a deterministically sorted list of its key-value pairs. ### to_sorted_list/2 ### `to_sorted_list(Msg, Opts) -> any()` ### trace_macro_helper/5 ### `trace_macro_helper(Fun, X2, Mod, Func, Line) -> any()` Utility function to help macro `?trace/0` remove the first frame of the stack trace. ### unique/1 ### `unique(List) -> any()` Take a list and return a list of unique elements. The function is order-preserving. ### until/1 ### `until(Condition) -> any()` Utility function to wait for a condition to be true. Optionally, you can pass a function that will be called with the current count of iterations, returning an integer that will be added to the count. Once the condition is true, the function will return the count. ### until/2 ### `until(Condition, Count) -> any()` ### until/3 ### `until(Condition, Fun, Count) -> any()` ### variance/1 ### `variance(List) -> any()` ### weighted_random/1 ### `weighted_random(List) -> any()` Return a random element from a list, weighted by the values in the list. --- END OF FILE: docs/resources/source-code/hb_util.md --- --- START OF FILE: docs/resources/source-code/hb_volume.md --- # [Module hb_volume.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb_volume.erl) ## Function Index ##
change_node_store/2
check_for_device/1
create_actual_partition/2*
create_mount_info/3*
create_partition/2
format_disk/2
get_partition_info/1*
list_partitions/0
mount_disk/4
mount_opened_volume/3*
parse_disk_info/2*
parse_disk_line/2*
parse_disk_model_line/2*
parse_disk_units_line/2*
parse_io_size_line/2*
parse_sector_size_line/2*
process_disk_line/2*
update_store_config/2*
## Function Details ## ### change_node_store/2 ###

change_node_store(StorePath::binary(), CurrentStore::list()) -> {ok, map()} | {error, binary()}

### check_for_device/1 ###

check_for_device(Device::binary()) -> boolean()

### create_actual_partition/2 * ### `create_actual_partition(Device, PartType) -> any()` ### create_mount_info/3 * ### `create_mount_info(Partition, MountPoint, VolumeName) -> any()` ### create_partition/2 ###

create_partition(Device::binary(), PartType::binary()) -> {ok, map()} | {error, binary()}

### format_disk/2 ###

format_disk(Partition::binary(), EncKey::binary()) -> {ok, map()} | {error, binary()}

### get_partition_info/1 * ### `get_partition_info(Device) -> any()` ### list_partitions/0 ###

list_partitions() -> {ok, map()} | {error, binary()}

### mount_disk/4 ###

mount_disk(Partition::binary(), EncKey::binary(), MountPoint::binary(), VolumeName::binary()) -> {ok, map()} | {error, binary()}

### mount_opened_volume/3 * ### `mount_opened_volume(Partition, MountPoint, VolumeName) -> any()` ### parse_disk_info/2 * ### `parse_disk_info(Device, Lines) -> any()` ### parse_disk_line/2 * ### `parse_disk_line(Line, Info) -> any()` ### parse_disk_model_line/2 * ### `parse_disk_model_line(Line, Info) -> any()` ### parse_disk_units_line/2 * ### `parse_disk_units_line(Line, Info) -> any()` ### parse_io_size_line/2 * ### `parse_io_size_line(Line, Info) -> any()` ### parse_sector_size_line/2 * ### `parse_sector_size_line(Line, Info) -> any()` ### process_disk_line/2 * ### `process_disk_line(Line, X2) -> any()` ### update_store_config/2 * ###

update_store_config(StoreConfig::term(), NewPath::binary()) -> term()
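Putting two of the exported specs above together, a hedged sketch of a read-only check: the device path is a placeholder and the shape of the returned map is not specified here.

```erlang
%% Illustrative only: check that a block device exists, then list the
%% partitions the node can see. The device path is a placeholder.
volume_example() ->
    true = hb_volume:check_for_device(<<"/dev/sdb">>),
    {ok, Partitions} = hb_volume:list_partitions(),
    Partitions.
```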

--- END OF FILE: docs/resources/source-code/hb_volume.md --- --- START OF FILE: docs/resources/source-code/hb.md --- # [Module hb.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/hb.erl) Hyperbeam is a decentralized node implementing the AO-Core protocol on top of Arweave. ## Description ## This protocol offers a computation layer for executing arbitrary logic on top of the network's data. Arweave is built to offer a robust, permanent storage layer for static data over time. It can be seen as a globally distributed key-value store that allows users to lookup IDs to retrieve data at any point in time: `Arweave(ID) => Message` Hyperbeam adds another layer of functionality on top of Arweave's protocol: Allowing users to store and retrieve not only arbitrary bytes, but also to perform execution of computation upon that data: `Hyperbeam(Message1, Message2) => Message3` When Hyperbeam executes a message, it will return a new message containing the result of that execution, as well as signed commitments of its correctness. If the computation that is executed is deterministic, recipients of the new message are able to verify that the computation was performed correctly. The new message may be stored back to Arweave if desired, forming a permanent, verifiable, and decentralized log of computation. The mechanisms described above form the basis of a decentralized and verifiable compute engine without any relevant protocol-enforced scalability limits. It is an implementation of a global, shared supercomputer. Hyperbeam can be used for an extremely large variety of applications, from serving static Arweave data with signed commitments of correctness, to executing smart contracts that have _built-in_ HTTP APIs. The Hyperbeam node implementation implements AO, an Actor-Oriented process-based environment for orchestrating computation over Arweave messages in order to facilitate the execution of more traditional, consensus-based smart contracts. The core abstractions of the Hyperbeam node are broadly as follows: 1. The `hb` and `hb_opts` modules manage the node's configuration, environment variables, and debugging tools. 2. The `hb_http` and `hb_http_server` modules manage all HTTP-related functionality. `hb_http_server` handles turning received HTTP requests into messages and applying those messages with the appropriate devices. `hb_http` handles making requests and responding with messages. `cowboy` is used to implement the underlying HTTP server. 3. `hb_ao` implements the computation logic of the node: A mechanism for resolving messages to other messages, via the application of logic implemented in `devices`. `hb_ao` also manages the loading of Erlang modules for each device into the node's environment. There are many different default devices implemented in the hyperbeam node, using the namespace `dev_*`. Some of the critical components are: - `dev_message`: The default handler for all messages that do not specify their own device. The message device is also used to resolve keys that are not implemented by the device specified in a message, unless otherwise signalled. - `dev_stack`: The device responsible for creating and executing stacks of other devices on messages that request it. There are many uses for this device, one of which is the resolution of AO processes. - `dev_p4`: The device responsible for managing payments for the services provided by the node. 4. `hb_store`, `hb_cache` and the store implementations forms a layered system for managing the node's access to persistent storage. 
`hb_cache` is used as a resolution mechanism for reading and writing messages, while `hb_store` provides an abstraction over the underlying persistent key-value byte storage mechanisms. Example `hb_store` mechanisms can be found in `hb_store_fs` and `hb_store_remote_node`. 5. `ar_*` modules implement functionality related to the base-layer Arweave protocol and are largely unchanged from their counterparts in the Arweave node codebase presently maintained by the Digital History Association (@dha-team/Arweave). You can find documentation of a similar form to this note in each of the core modules of the hyperbeam node. ## Function Index ##

| Function | Description |
| --- | --- |
| `address/0` | Get the address of a wallet. |
| `address/1`* | |
| `benchmark/2` | Run a function as many times as possible in a given amount of time. |
| `benchmark/3` | Run multiple instances of a function in parallel for a given amount of time. |
| `build/0` | Utility function to hot-recompile and load the hyperbeam environment. |
| `debug_wait/4` | Utility function to wait for a given amount of time, printing a debug message to the console first. |
| `deploy_scripts/0` | Upload all scripts from the `scripts` directory to Arweave via the node, printing their IDs. |
| `deploy_scripts/1`* | |
| `do_start_simple_pay/1`* | |
| `init/0` | Initialize system-wide settings for the hyperbeam node. |
| `no_prod/3` | Utility function to throw an error if the current mode is prod and non-prod ready code is being executed. |
| `now/0` | Utility function to get the current time in milliseconds. |
| `profile/1` | Utility function to start a profiling session and run a function, then analyze the results. |
| `read/1` | Debugging function to read a message from the cache. |
| `read/2` | |
| `start_mainnet/0` | Start a mainnet server without payments. |
| `start_mainnet/1` | |
| `start_simple_pay/0` | Start a server with a `simple-pay@1.0` pre-processor. |
| `start_simple_pay/1` | |
| `start_simple_pay/2` | |
| `topup/3` | Helper for topping up a user's balance on a simple-pay node. |
| `topup/4` | |
| `wallet/0` | |
| `wallet/1` | |
| `wallet/2`* | |

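A hedged sketch of the node-operator helpers listed above, as they might be called from an Erlang shell; the port is an illustrative assumption, and the exact semantics of each call are documented in the function details below.

```erlang
%% Illustrative only: start a mainnet server (without payments) on a chosen
%% port and return the operator wallet address. The port is an assumption.
node_example() ->
    hb:start_mainnet(8734),
    hb:address().
```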
## Function Details ## ### address/0 ### `address() -> any()` Get the address of a wallet. Defaults to the address of the wallet specified by the `priv_key_location` configuration key. It can also take a wallet tuple as an argument. ### address/1 * ### `address(Wallet) -> any()` ### benchmark/2 ### `benchmark(Fun, TLen) -> any()` Run a function as many times as possible in a given amount of time. ### benchmark/3 ### `benchmark(Fun, TLen, Procs) -> any()` Run multiple instances of a function in parallel for a given amount of time. ### build/0 ### `build() -> any()` Utility function to hot-recompile and load the hyperbeam environment. ### debug_wait/4 ### `debug_wait(T, Mod, Func, Line) -> any()` Utility function to wait for a given amount of time, printing a debug message to the console first. ### deploy_scripts/0 ### `deploy_scripts() -> any()` Upload all scripts from the `scripts` directory to the node to Arweave, printing their IDs. ### deploy_scripts/1 * ### `deploy_scripts(Dir) -> any()` ### do_start_simple_pay/1 * ### `do_start_simple_pay(Opts) -> any()` ### init/0 ### `init() -> any()` Initialize system-wide settings for the hyperbeam node. ### no_prod/3 ### `no_prod(X, Mod, Line) -> any()` Utility function to throw an error if the current mode is prod and non-prod ready code is being executed. You can find these in the codebase by looking for ?NO_PROD calls. ### now/0 ### `now() -> any()` Utility function to get the current time in milliseconds. ### profile/1 ### `profile(Fun) -> any()` Utility function to start a profiling session and run a function, then analyze the results. Obviously -- do not use in production. ### read/1 ### `read(ID) -> any()` Debugging function to read a message from the cache. Specify either a scope atom (local or remote) or a store tuple as the second argument. ### read/2 ### `read(ID, ScopeAtom) -> any()` ### start_mainnet/0 ### `start_mainnet() -> any()` Start a mainnet server without payments. ### start_mainnet/1 ### `start_mainnet(Port) -> any()` ### start_simple_pay/0 ### `start_simple_pay() -> any()` Start a server with a `simple-pay@1.0` pre-processor. ### start_simple_pay/1 ### `start_simple_pay(Addr) -> any()` ### start_simple_pay/2 ### `start_simple_pay(Addr, Port) -> any()` ### topup/3 ### `topup(Node, Amount, Recipient) -> any()` Helper for topping up a user's balance on a simple-pay node. ### topup/4 ### `topup(Node, Amount, Recipient, Wallet) -> any()` ### wallet/0 ### `wallet() -> any()` ### wallet/1 ### `wallet(Location) -> any()` ### wallet/2 * ### `wallet(Location, Opts) -> any()` --- END OF FILE: docs/resources/source-code/hb.md --- --- START OF FILE: docs/resources/source-code/index.md --- # Source Code Documentation Welcome to the source code documentation for HyperBEAM. This section provides detailed insights into the codebase, helping developers understand the structure, functionality, and implementation details of HyperBEAM and its components. ## Overview HyperBEAM is built with a modular architecture to ensure scalability, maintainability, and extensibility. The source code is organized into distinct components, each serving a specific purpose within the ecosystem. ## Sections - **HyperBEAM Core**: The main framework that orchestrates data processing, storage, and routing. - **Compute Unit**: Handles computational tasks and integrates with the HyperBEAM core for distributed processing. - **Trusted Execution Environment (TEE)**: Ensures secure execution of sensitive operations. 
- **Client Libraries**: Tools and SDKs for interacting with HyperBEAM, including the JavaScript client. ## Getting Started To explore the source code, you can clone the repository from [GitHub](https://github.com/permaweb/HyperBEAM). ## Navigation Use the navigation menu to dive into specific parts of the codebase. Each module includes detailed documentation, code comments, and examples to assist in understanding and contributing to the project. --- END OF FILE: docs/resources/source-code/index.md --- --- START OF FILE: docs/resources/source-code/README.md --- # The hb application # ## Modules ##
ar_bundles
ar_deep_hash
ar_rate_limiter
ar_timestamp
ar_tx
ar_wallet
dev_apply
dev_cache
dev_cacheviz
dev_codec_ans104
dev_codec_flat
dev_codec_httpsig
dev_codec_httpsig_conv
dev_codec_httpsig_siginfo
dev_codec_json
dev_codec_structured
dev_cron
dev_cu
dev_dedup
dev_delegated_compute
dev_faff
dev_genesis_wasm
dev_green_zone
dev_hook
dev_hyperbuddy
dev_json_iface
dev_local_name
dev_lookup
dev_lua
dev_lua_lib
dev_lua_test
dev_lua_test_ledgers
dev_manifest
dev_message
dev_meta
dev_monitor
dev_multipass
dev_name
dev_node_process
dev_p4
dev_patch
dev_poda
dev_process
dev_process_cache
dev_process_worker
dev_push
dev_relay
dev_router
dev_scheduler
dev_scheduler_cache
dev_scheduler_formats
dev_scheduler_registry
dev_scheduler_server
dev_simple_pay
dev_snp
dev_snp_nif
dev_stack
dev_test
dev_volume
dev_wasi
dev_wasm
hb
hb_ao
hb_ao_test_vectors
hb_app
hb_beamr
hb_beamr_io
hb_cache
hb_cache_control
hb_cache_render
hb_client
hb_crypto
hb_debugger
hb_escape
hb_event
hb_examples
hb_features
hb_gateway_client
hb_http
hb_http_benchmark_tests
hb_http_client
hb_http_client_sup
hb_http_server
hb_json
hb_keccak
hb_link
hb_logger
hb_maps
hb_message
hb_message_test_vectors
hb_metrics_collector
hb_name
hb_opts
hb_path
hb_persistent
hb_private
hb_process_monitor
hb_router
hb_singleton
hb_store
hb_store_fs
hb_store_gateway
hb_store_lmdb
hb_store_lru
hb_store_remote_node
hb_store_rocksdb
hb_structured_fields
hb_sup
hb_test_utils
hb_tracer
hb_util
hb_volume
rsa_pss
--- END OF FILE: docs/resources/source-code/README.md --- --- START OF FILE: docs/resources/source-code/rsa_pss.md --- # [Module rsa_pss.erl](https://github.com/permaweb/HyperBEAM/blob/main/src/rsa_pss.erl) Distributed under the Mozilla Public License v2.0. Copyright (c) 2014-2015, Andrew Bennett __Authors:__ Andrew Bennett ([`andrew@pixid.com`](mailto:andrew@pixid.com)). ## Description ## Original available at: https://github.com/potatosalad/erlang-crypto_rsassa_pss ## Data Types ## ### rsa_digest_type() ###

rsa_digest_type() = md5 | sha | sha224 | sha256 | sha384 | sha512
### rsa_private_key() ###

rsa_private_key() = #RSAPrivateKey{}
### rsa_public_key() ###

rsa_public_key() = #RSAPublicKey{}
## Function Index ##
dp/2*
ep/2*
int_to_bit_size/1*
int_to_bit_size/2*
int_to_byte_size/1*
int_to_byte_size/2*
mgf1/3*
mgf1/5*
normalize_to_key_size/2*
pad_to_key_size/2*
sign/3
sign/4
verify/4
verify_legacy/4
## Function Details ## ### dp/2 * ### `dp(B, X2) -> any()` ### ep/2 * ### `ep(B, X2) -> any()` ### int_to_bit_size/1 * ### `int_to_bit_size(I) -> any()` ### int_to_bit_size/2 * ### `int_to_bit_size(I, B) -> any()` ### int_to_byte_size/1 * ### `int_to_byte_size(I) -> any()` ### int_to_byte_size/2 * ### `int_to_byte_size(I, B) -> any()` ### mgf1/3 * ### `mgf1(DigestType, Seed, Len) -> any()` ### mgf1/5 * ### `mgf1(DigestType, Seed, Len, T, Counter) -> any()` ### normalize_to_key_size/2 * ### `normalize_to_key_size(Bits, A) -> any()` ### pad_to_key_size/2 * ### `pad_to_key_size(Bytes, Data) -> any()` ### sign/3 ###

sign(Message, DigestType, PrivateKey) -> Signature
### sign/4 ###

sign(Message, DigestType, Salt, PrivateKey) -> Signature
### verify/4 ###

verify(Message, DigestType, Signature, PublicKey) -> boolean()
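Putting `sign/3` and `verify/4` together, a hedged sketch of a round trip. The keypair generation via OTP's `public_key` application and the record handling are assumptions about typical usage, not taken from this module; the key size, public exponent, and digest type are illustrative choices.

```erlang
-include_lib("public_key/include/public_key.hrl").

%% Generate an RSA keypair with OTP's public_key application, sign a
%% message with RSA-PSS, and verify the signature.
sign_verify_example() ->
    Priv = #'RSAPrivateKey'{modulus = N, publicExponent = E} =
        public_key:generate_key({rsa, 2048, 65537}),
    Pub = #'RSAPublicKey'{modulus = N, publicExponent = E},
    Msg = <<"hello, hyperbeam">>,
    Sig = rsa_pss:sign(Msg, sha256, Priv),
    true = rsa_pss:verify(Msg, sha256, Sig, Pub).
```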
### verify_legacy/4 ### `verify_legacy(Message, DigestType, Signature, PublicKey) -> any()` --- END OF FILE: docs/resources/source-code/rsa_pss.md --- --- START OF FILE: docs/run/configuring-your-machine.md --- # Configuring Your HyperBEAM Node This guide details the various ways to configure your HyperBEAM node's behavior, including ports, storage, keys, and logging. ## Configuration (`config.flat`) The primary way to configure your HyperBEAM node is through a `config.flat` file located in the node's working directory or specified by the `HB_CONFIG_LOCATION` environment variable. This file uses a simple `Key = Value.` format (note the period at the end of each line). **Example `config.flat`:** ```erlang % Set the HTTP port port = 8080. % Specify the Arweave key file priv_key_location = "/path/to/your/wallet.json". % Set the data store directory % Note: Storage configuration can be complex. See below. % store = [{local, [{root, <<"./node_data_mainnet">>}]}]. % Example of complex config, not for config.flat % Enable verbose logging for specific modules % debug_print = [hb_http, dev_router]. % Example of complex config, not for config.flat ``` Below is a reference of commonly used configuration keys. Remember that `config.flat` only supports simple key-value pairs (Atoms, Strings, Integers, Booleans). For complex configurations (Lists, Maps), you must use environment variables or `hb:start_mainnet/1`. ### Core Configuration These options control fundamental HyperBEAM behavior. | Option | Type | Default | Description | |--------|------|---------|-------------| | `port` | Integer | 8734 | HTTP API port | | `hb_config_location` | String | "config.flat" | Path to configuration file | | `priv_key_location` | String | "hyperbeam-key.json" | Path to operator wallet key file | | `mode` | Atom | debug | Execution mode (debug, prod) | ### Server & Network Configuration These options control networking behavior and HTTP settings. | Option | Type | Default | Description | |--------|------|---------|-------------| | `host` | String | "localhost" | Choice of remote node for non-local tasks | | `gateway` | String | "https://arweave.net" | Default gateway | | `bundler_ans104` | String | "https://up.arweave.net:443" | Location of ANS-104 bundler | | `protocol` | Atom | http2 | Protocol for HTTP requests (http1, http2, http3) | | `http_client` | Atom | gun | HTTP client to use (gun, httpc) | | `http_connect_timeout` | Integer | 5000 | HTTP connection timeout in milliseconds | | `http_keepalive` | Integer | 120000 | HTTP keepalive time in milliseconds | | `http_request_send_timeout` | Integer | 60000 | HTTP request send timeout in milliseconds | | `relay_http_client` | Atom | httpc | HTTP client for the relay device | ### Security & Identity These options control identity and security settings. | Option | Type | Default | Description | |--------|------|---------|-------------| | `scheduler_location_ttl` | Integer | 604800000 | TTL for scheduler registration (7 days in ms) | ### Caching & Storage These options control caching behavior. **Note:** Detailed storage configuration (`store` option) involves complex data structures and cannot be set via `config.flat`. 
| Option | Type | Default | Description | |--------|------|---------|-------------| | `cache_lookup_heuristics` | Boolean | false | Whether to use caching heuristics or always consult the local data store | | `access_remote_cache_for_client` | Boolean | false | Whether to access data from remote caches for client requests | | `store_all_signed` | Boolean | true | Whether the node should store all signed messages | | `await_inprogress` | Atom/Boolean | named | Whether to await in-progress executions (false, named, true) | ### Execution & Processing These options control how HyperBEAM executes messages and processes. | Option | Type | Default | Description | |--------|------|---------|-------------| | `scheduling_mode` | Atom | local_confirmation | When to inform recipients about scheduled assignments (aggressive, local_confirmation, remote_confirmation) | | `compute_mode` | Atom | lazy | Whether to execute more messages after returning a result (aggressive, lazy) | | `process_workers` | Boolean | true | Whether the node should use persistent processes | | `client_error_strategy` | Atom | throw | What to do if a client error occurs | | `wasm_allow_aot` | Boolean | false | Allow ahead-of-time compilation for WASM | ### Device Management These options control how HyperBEAM manages devices. | Option | Type | Default | Description | |--------|------|---------|-------------| | `load_remote_devices` | Boolean | false | Whether to load devices from remote signers | ### Debug & Development These options control debugging and development features. | Option | Type | Default | Description | |--------|------|---------|-------------| | `debug_stack_depth` | Integer | 40 | Maximum stack depth for debug printing | | `debug_print_map_line_threshold` | Integer | 30 | Maximum lines for map printing | | `debug_print_binary_max` | Integer | 60 | Maximum binary size for debug printing | | `debug_print_indent` | Integer | 2 | Indentation for debug printing | | `debug_print_trace` | Atom | short | Trace mode (short, false) | | `short_trace_len` | Integer | 5 | Length of short traces | | `debug_hide_metadata` | Boolean | true | Whether to hide metadata in debug output | | `debug_ids` | Boolean | false | Whether to print IDs in debug output | | `debug_hide_priv` | Boolean | true | Whether to hide private data in debug output | **Note:** For the *absolute complete* and most up-to-date list, including complex options not suitable for `config.flat`, refer to the `default_message/0` function in the `hb_opts` module source code. ## Overrides (Environment Variables & Args) You can override settings from `config.flat` or provide values if the file is missing using environment variables or command-line arguments. **Using Environment Variables:** Environment variables typically use an `HB_` prefix followed by the configuration key in uppercase. * **`HB_PORT=`:** Overrides `hb_port`. * Example: `HB_PORT=8080 rebar3 shell` * **`HB_KEY=`:** Overrides `hb_key`. * Example: `HB_KEY=~/.keys/arweave_key.json rebar3 shell` * **`HB_STORE=`:** Overrides `hb_store`. * Example: `HB_STORE=./node_data_1 rebar3 shell` * **`HB_PRINT=`:** Overrides `hb_print`. `` can be `true` (or `1`), or a comma-separated list of modules/topics (e.g., `hb_path,hb_ao,ao_result`). * Example: `HB_PRINT=hb_http,dev_router rebar3 shell` * **`HB_CONFIG_LOCATION=`:** Specifies a custom location for the configuration file. 
**Using `erl_opts` (Direct Erlang VM Arguments):** You can also pass arguments directly to the Erlang VM using the `- ` format within `erl_opts`. This is generally less common for application configuration than `config.flat` or environment variables. ```bash rebar3 shell --erl_opts "-hb_port 8080 -hb_key path/to/key.json" ``` **Order of Precedence:** 1. Command-line arguments (`erl_opts`). 2. Settings in `config.flat`. 3. Environment variables (`HB_*`). 4. Default values from `hb_opts.erl`. ## Configuration in Releases When running a release build (see [Running a HyperBEAM Node](./running-a-hyperbeam-node.md)), configuration works similarly: 1. A `config.flat` file will be present in the release directory (e.g., `_build/default/rel/hb/config.flat`). Edit this file to set your desired parameters for the release environment. 2. Environment variables (`HB_*`) can still be used to override the settings in the release's `config.flat` when starting the node using the `bin/hb` script. --- END OF FILE: docs/run/configuring-your-machine.md --- --- START OF FILE: docs/run/joining-running-a-router.md --- # Joining or Running a Router Node Router nodes play a crucial role in the HyperBEAM network by directing incoming HTTP requests to appropriate worker nodes capable of handling the requested computation or data retrieval. They act as intelligent load balancers and entry points into the AO ecosystem. !!! info "Advanced Topic" Configuring and running a production-grade router involves considerations beyond the scope of this introductory guide, including network topology, security, high availability, and performance tuning. ## What is a Router? In HyperBEAM, the `dev_router` module (and associated logic) implements routing functionality. A node configured as a router typically: 1. Receives external HTTP requests (HyperPATH calls). 2. Parses the request path to determine the target process, device, and desired operation. 3. Consults its routing table or logic to select an appropriate downstream worker node (which might be itself or another node). 4. Forwards the request to the selected worker. 5. Receives the response from the worker. 6. Returns the response to the original client. Routers often maintain information about the capabilities and load of worker nodes they know about. !!! note "Using Routers as a Client" To use a router as a client, simply make HyperPATH requests to the router's URL: `https://dev-router.forward.computer/~/...`. The router will automatically route your request to an appropriate worker node. ## Node Registration Process *Coming soon...* ## Further Exploration * Examine the `dev_router.erl` source code for detailed implementation. * Review the `scripts/dynamic-router.lua` for router-side logic. * Review the available configuration options in `hb_opts.erl` related to routing (`routes`, strategies, etc.). * Consult community channels for best practices on deploying production routers. --> --- END OF FILE: docs/run/joining-running-a-router.md --- --- START OF FILE: docs/run/running-a-hyperbeam-node.md --- # Running a HyperBEAM Node This guide provides the basics for running your own HyperBEAM node, installing dependencies, and connecting to the AO network. ## System Dependencies To successfully build and run a HyperBEAM node, your system needs several software dependencies installed. 
=== "macOS" Install core dependencies using [Homebrew](https://brew.sh/): ```bash brew install cmake git pkg-config openssl ncurses ``` === "Linux (Debian/Ubuntu)" Install core dependencies using `apt`: ```bash sudo apt-get update && sudo apt-get install -y --no-install-recommends \ build-essential \ cmake \ git \ pkg-config \ ncurses-dev \ libssl-dev \ sudo \ curl \ ca-certificates ``` === "Windows (WSL)" Using the Windows Subsystem for Linux (WSL) with a distribution like Ubuntu is recommended. Follow the Linux (Debian/Ubuntu) instructions within your WSL environment. ### Erlang/OTP HyperBEAM is built on Erlang/OTP. You need a compatible version installed (check the `rebar.config` or project documentation for specific version requirements, **typically OTP 27**). Installation methods: === "macOS (brew)" ```bash brew install erlang ``` === "Linux (apt)" ```bash sudo apt install erlang ``` === "Source Build" Download from [erlang.org](https://www.erlang.org/downloads) and follow the build instructions for your platform. ### Rebar3 Rebar3 is the build tool for Erlang projects. Installation methods: === "macOS (brew)" ```bash brew install rebar3 ``` === "Linux / macOS (Direct Download)" Get the `rebar3` binary from the [official website](https://rebar3.org/). Place the downloaded `rebar3` file in your system's `PATH` (e.g., `/usr/local/bin`) and make it executable (`chmod +x rebar3`). ### Node.js Node.js might be required for certain JavaScript-related tools or dependencies. Installation methods: === "macOS (brew)" ```bash brew install node ``` === "Linux (apt)" ```bash # Check your distribution's recommended method, might need nodesource repo sudo apt install nodejs npm ``` === "asdf (Recommended)" `asdf-vm` with the `asdf-nodejs` plugin is recommended. ```bash asdf plugin add nodejs https://github.com/asdf-vm/asdf-nodejs.git asdf install nodejs # e.g., lts asdf global nodejs ``` ### Rust Rust is needed if you intend to work with or build components involving WebAssembly (WASM) or certain Native Implemented Functions (NIFs) used by some devices (like `~snp@1.0`). The recommended way to install Rust on **all platforms** is via `rustup`: ```bash curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh source "$HOME/.cargo/env" # Or follow the instructions provided by rustup ``` ## Prerequisites for Running Before starting a node, ensure you have: * Installed the [system dependencies](#system-dependencies) mentioned above. * Cloned the [HyperBEAM repository](https://github.com/permaweb/HyperBEAM) (`git clone ...`). * Compiled the source code (`rebar3 compile` in the repo directory). * An Arweave **wallet keyfile** (e.g., generated via [Wander](https://www.wander.app)). The path to this file is typically set via the `hb_key` configuration option (see [Configuring Your HyperBEAM Node](./configuring-your-machine.md)). ## Starting a Basic Node The simplest way to start a HyperBEAM node for development or testing is using `rebar3` from the repository's root directory: ```bash rebar3 shell ``` This command: 1. Starts the Erlang Virtual Machine (BEAM) with all HyperBEAM modules loaded. 2. Initializes the node with default settings (from `hb_opts.erl`). 3. Starts the default HTTP server (typically on **port 10000**), making the node accessible via HyperPATHs. 4. Drops you into an interactive Erlang shell where you can interact with the running node. This basic setup is suitable for local development and exploring HyperBEAM's functionalities. 
## Optional Build Profiles HyperBEAM uses build profiles to enable optional features, often requiring extra dependencies. To run a node with specific profiles enabled, use `rebar3 as ... shell`: **Available Profiles (Examples):** * `genesis_wasm`: Enables Genesis WebAssembly support. * `rocksdb`: Enables the RocksDB storage backend. * `http3`: Enables HTTP/3 support. **Example Usage:** ```bash # Start with RocksDB profile rebar3 as rocksdb shell # Start with RocksDB and Genesis WASM profiles rebar3 as rocksdb, genesis_wasm shell ``` *Note: Choose profiles **before** starting the shell, as they affect compile-time options.* ## Node Configuration HyperBEAM offers various configuration options (port, key file, data storage, logging, etc.). These are primarily set using a `config.flat` file and can be overridden by environment variables or command-line arguments. See the dedicated [Configuring Your HyperBEAM Node](./configuring-your-machine.md) guide for detailed information on all configuration methods and options. ## Verify Installation To quickly check if your node is running and accessible, you can send a request to its `~meta@1.0` device (assuming the default port 8734): ```bash curl http://localhost:8734/~meta@1.0/info ``` A JSON response containing node information indicates success. ## Running for Production (Mainnet) While you can connect to the main AO network using the `rebar3 shell` for testing purposes (potentially using specific configurations or helper functions like `hb:start_mainnet/1` if available and applicable), the standard and recommended method for a stable production deployment (like running on the mainnet) is to build and run a **release**. **1. Build the Release:** From the root of the HyperBEAM repository, build the release package. You might include specific profiles needed for your mainnet setup (e.g., `rocksdb` if you intend to use it): ```bash # Build release with default profile rebar3 release # Or, build with specific profiles (example) # rebar3 as rocksdb release ``` This command compiles the project and packages it along with the Erlang Runtime System (ERTS) and all dependencies into a directory, typically `_build/default/rel/hb`. **2. Configure the Release:** Navigate into the release directory (e.g., `cd _build/default/rel/hb`). Ensure you have a correctly configured `config.flat` file here. See the [configuration guide](./configuring-your-machine.md) for details on setting mainnet parameters (port, key file location, store path, specific peers, etc.). Environment variables can also be used to override settings in the release's `config.flat` when starting the node. **3. Start the Node:** Use the generated start script (`bin/hb`) to run the node: ```bash # Start the node in the foreground (logs to console) ./bin/hb console # Start the node as a background daemon ./bin/hb start # Check the status ./bin/hb ping ./bin/hb status # Stop the node ./bin/hb stop ``` Consult the generated `bin/hb` script or Erlang/OTP documentation for more advanced start-up options (e.g., attaching a remote shell). Running as a release provides a more robust, isolated, and manageable way to operate a node compared to running directly from the `rebar3 shell`. ## Stopping the Node (rebar3 shell) To stop the node running *within the `rebar3 shell`*, press `Ctrl+C` twice or use the Erlang command `q().`. ## Next Steps * **Configure Your Node:** Deep dive into [configuration options](./configuring-your-machine.md).
* **TEE Nodes:** Learn about running nodes in [Trusted Execution Environments](./tee-nodes.md) for enhanced security. * **Routers:** Understand how to configure and run a [router node](./joining-running-a-router.md). --- END OF FILE: docs/run/running-a-hyperbeam-node.md --- --- START OF FILE: docs/run/tee-nodes.md --- # Trusted Execution Environment (TEE) !!! info "Documentation Coming Soon" Detailed documentation about Trusted Execution Environment support in HyperBEAM is currently being developed and will be available soon. ## Overview HyperBEAM supports Trusted Execution Environments (TEEs) through the `~snp@1.0` device, which enables secure, trust-minimized computation on remote machines. TEEs provide hardware-level isolation and attestation capabilities that allow users to verify that their code is running in a protected environment, exactly as intended, even on untrusted hardware. The `~snp@1.0` device in HyperBEAM is used to generate and validate proofs that a node is executing inside a Trusted Execution Environment. Nodes executing inside these environments use an ephemeral key pair that provably only exists inside the TEE, and can sign attestations of AO-Core executions in a trust-minimized way. ## Key Features - Hardware-level isolation for secure computation - Remote attestation capabilities - Protected execution state - Confidential computing support - Compatibility with AMD SEV-SNP technology ## Coming Soon Detailed documentation on the following topics will be added: - TEE setup and configuration - Using the `~snp@1.0` device - Verifying TEE attestations - Developing for TEEs - Security considerations - Performance characteristics If you intend to offer TEE-based computation of AO-Core devices, please see the [HyperBEAM OS repository](https://github.com/permaweb/hb-os) for preliminary details on configuration and deployment. --- END OF FILE: docs/run/tee-nodes.md ---