After creating a model in TensorFlow.js, I ran into a problem with sharing it in my projects. Even a fairly small Pokémon photo-recognition model weighs 12 MB, and that is after quantization and conversion to a graph model. Hosting the model in Firebase Storage could quickly end in a huge transfer bill if it became popular. Fortunately, in the world of ever-improving browsers it is already possible to share files on the same principle as a torrent (the only requirement is at least one active device serving our files). It is a p2p network in which every user shares the file, and the best part is that it is easy to use thanks to a library called IPFS.
Additionally, it has a very cool browser-based file management application.
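If you want to publish a model this way, the files can be added straight from a js-ipfs node. The snippet below is only a minimal sketch: it assumes a js-ipfs version that exposes node.addAll and yields entries with a cid field, and the shard file name and ./publish-model.ts are hypothetical.
./publish-model.ts
import * as IPFS from 'ipfs';
import { promises as fs } from 'fs';

// Minimal sketch: publish the converted model files to IPFS from Node.js.
// List every weight shard your converter produced next to model.json.
const publishModel = async () => {
  const node = await IPFS.create();
  const fileNames = ['model.json', 'group1-shard1of1.bin'];
  const entries = await Promise.all(
    fileNames.map(async (name) => ({
      // A common directory prefix makes IPFS create a directory node,
      // so the whole model is reachable under a single hash.
      path: `quantized-graph-pokemon-model/${name}`,
      content: await fs.readFile(`./quantized-graph-pokemon-model/${name}`),
    }))
  );
  for await (const entry of node.addAll(entries)) {
    console.log(entry.path, entry.cid.toString());
  }
};

publishModel();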
Ok, we already have a model and a place where we can store it at low cost, but how do we make TensorFlow.js download the files from IPFS?
The default model loader uses the Fetch API to download the necessary files, and best of all, you can replace that implementation with your own via the fetchFunc load option.
./get-blob-url.ts
// it-all and it-last export a single function as the default export.
import all from 'it-all';
import last from 'it-last';

// Resolves an IPFS path to a temporary blob: URL that fetch() can consume.
export const getBlobUrl = async (node: any, hash: string) => {
  // node.get() yields the entries under the given path; for a file path
  // the last (and only) entry is the file itself.
  const file = await last(node.get(hash));
  // file.content is an async iterable of chunks; collect them into memory.
  const content = await all(file.content);
  const blob = new Blob(content);
  const url = URL.createObjectURL(blob);
  return url;
};
./main.ts
import * as IPFS from 'ipfs';
import { loadGraphModel } from '@tensorflow/tfjs';
import { getBlobUrl } from './get-blob-url';

const run = async () => {
  const ipfsPath = '/ipfs/QmSECF4CLdXkh2iNQWNJYYdrTzinqHPNjST1BUjEjq4X5q/quantized-graph-pokemon-model/model.json';
  const node = await IPFS.create();
  const model = await loadGraphModel(ipfsPath, {
    // The loader calls fetchFunc for model.json and for every weight shard,
    // so each request is served from IPFS through a temporary blob: URL.
    fetchFunc: async (url: string) => {
      const blobUrl = await getBlobUrl(node, url);
      return fetch(blobUrl);
    },
  });
};

run();
This way the model loader will correctly resolve the relative addresses of the weight files for our model.
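Once loaded, the model can be used like any other graph model. Below is a minimal sketch of running a prediction on an image element; the 224×224 input size and the 0–1 normalization are assumptions about the model, not details given here, and ./classify.ts is a hypothetical file name.
./classify.ts
import * as tf from '@tensorflow/tfjs';

// Minimal sketch: run the loaded graph model on an <img> element.
// The 224x224 input size and 0-1 normalization are assumptions.
export const classify = async (model: tf.GraphModel, image: HTMLImageElement) => {
  const input = tf.tidy(() =>
    tf.browser
      .fromPixels(image)
      .resizeBilinear([224, 224])
      .toFloat()
      .div(255)
      .expandDims(0)
  );
  const output = model.predict(input) as tf.Tensor;
  const scores = await output.data();
  input.dispose();
  output.dispose();
  return scores;
};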
In this example the program downloads a model for Pokémon detection in images, which I use in