standalone hyperdimensional computing library in c99. zero dependencies, runs anywhere.
float vectors and binary (bit-packed) vectors. fft-based circular convolution. drop-in ready.
hdc encodes data into high-dimensional vectors and classifies by similarity. no training loop, no backpropagation, no gpu. works on microcontrollers, laptops, bare metal, whatever.
benchmarked on UCI datasets:
| dataset | features | classes | accuracy | notes |
|---|---|---|---|---|
| wine | 13 | 3 | 97.0% avg (200 seeds) | 100% on ~1 in 3 seeds |
| ionosphere | 34 | 2 | 90.3% avg (200 seeds) | beats raw kNN by 6% |
| iris | 4 | 3 | 96.7% avg | matches SVM |
- matches torchhd accuracy, runs 16x faster
- 25,000x lighter (15KB vs 500MB+)
- works at 64 dimensions just as well as 4096
- first known HDC implementation tested on quantum hardware (IonQ)
## primitives
- `bind` — element-wise multiply, encodes relationships, reversible
- `bundle` — element-wise add, combines vectors like a vote
- `permute` — circular shift, encodes position and order
- `normalize` — scale to unit length for fair comparisons
- `similize` — cosine similarity between two vectors
- `random_bipolar` — generate random vectors of -1 and 1
- `circular_convolve` — fft-based circular convolution (hrr-style binding)
## encoding
- `level_encode` — continuous value (0.0-1.0) to vector, randomized flip order for zero bias
- `id_level_encode` — multi-channel sensor data to one vector with channel identity
- `ngram` — sequence fingerprinting for pattern and order capture
## classification
- `train` — add examples to class prototypes
- `classify` — find the most similar class, returns -1 if nothing trained
- `hdc_classifier_init` — initialize classifier with dimension
## fft
- `fft` / `inverse_fft` — fast fourier transform on complex arrays
- `circular_convolve` — convolve two vectors via fft (captures cross-feature relationships)
- `complex_multiply` — complex number multiplication
- `vector_to_complex` / `complex_to_vector` — conversion helpers
helpers — `zero_vector`, `neg_vector`, `copy_vector`, `shuffle`, `check_null`, `check_dimension`
bit-packed vectors in `uint64_t` arrays. 64 dimensions per word. way faster, way less memory.

- `random_binary` — generate random bit vectors
- `bind_binary` — xor binding (single cpu instruction per 64 dims)
- `bundle_binary` — majority vote across multiple vectors
- `similize_binary` — hamming distance via popcount
- `permute_binary` — bit-level circular shift
- `level_encode_binary` — continuous value to binary vector
- `id_level_encode_binary` — multi-channel sensor encoding
- `train_binary` / `classify_binary` — accumulator-based classifier with majority vote thresholding
- `build_prototypes_binary` — threshold accumulators into binary prototypes
```c
#include "hdc.h"

#define DIM 4096

int main(void)
{
    hdc_init(42); // always call this first

    float a[DIM], b[DIM], result[DIM];
    random_bipolar(a, DIM);
    random_bipolar(b, DIM);
    bind(result, a, b, DIM);

    float sim;
    similize(&sim, a, b, DIM);
    // sim is near 0.0 — random vectors are nearly orthogonal
    return 0;
}
```

compile and link against hdc.c:

```
gcc -std=c99 -I. -o app your_file.c hdc.c -lm
```

for binary hdc:

```
gcc -std=c99 -I. -o app your_file.c hdc_binary.c -lm
```

to run the wine benchmark:

```
gcc -std=c99 -O2 -I. -o wine_benchmark examples/wine_benchmark.c hdc.c -lm
./wine_benchmark
```
- call `hdc_init()` before anything else. `level_encode` uses a randomized internal table that gets built during init; skip it and you get biased encoding with no error.
- classifier structs are large: the float classifier is ~5MB and the binary one is ~5MB (accumulators). declare them `static` or global, never as a local variable inside a function:

```c
static struct hdc_classifier clf;
hdc_classifier_init(&clf, 4096);
```
- all functions do NULL and bounds checking. you'll get a printed warning instead of a segfault if you pass bad pointers or invalid dimensions.
- max dimension is 10048, configurable via `HDC_MAX_DIMENSION` in the header.
- `circular_convolve` requires power-of-2 dimensions (512, 1024, 2048, 4096, etc) for the fft.
- binary dimensions must be multiples of 64 since vectors are packed into `uint64_t` words.
- gesture recognition demo on pico 2w + mpu6050
- text/language classification via ngram encoding
- simd acceleration (sse2/avx2)
- fpga hdc accelerator prototype
mit