Show HN: Gemini can now natively embed video, so I built sub-second video search
415 points by sohamrj 2 days ago | 102 comments
Gemini Embedding 2 can project raw video directly into a 768-dimensional vector space alongside text. No transcription, no frame captioning, no intermediate text. A query like "green car cutting me off" is directly comparable to a 30-second video clip at the vector level.

I used this to build a CLI that indexes hours of footage into ChromaDB, then searches it with natural language and auto-trims the matching clip. Demo video on the GitHub README. Indexing costs ~$2.50/hr of footage. Still-frame detection skips idle chunks, so security camera / sentry mode footage is much cheaper.
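A minimal sketch of the search side, if you're curious how the pieces fit. The real tool gets vectors from the Gemini API and stores them in ChromaDB; `embed_video` would stand in for that call, the cosine search is simplified, and overlap is omitted:

```python
import math

CHUNK_SECONDS = 30  # default chunk length used by the CLI

def chunk_starts(duration_s: float, chunk_s: int = CHUNK_SECONDS) -> list:
    """Start times (seconds) of fixed-length chunks covering the footage."""
    return [i * chunk_s for i in range(math.ceil(duration_s / chunk_s))]

def cosine(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, index) -> list:
    """index: {chunk_id: embedding}. Returns chunk ids, best match first.
    Because text and video share one vector space, query_vec can come from
    embedding a plain-text query."""
    return sorted(index, key=lambda cid: cosine(query_vec, index[cid]),
                  reverse=True)
```

In practice ChromaDB handles the nearest-neighbor search; this just shows why a text query and a video chunk are directly comparable.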


macNchz 2 days ago
This is a really cool implementation—embeddings still often feel like magic to me. That said, this exact use case is also my biggest point of concern with where AI takes us, much more so than most of the common AI risks you hear chatter about. We live in a world absolutely loaded with cameras now, but we retain some semblance of semi-anonymity/privacy in public by virtue of the fact that nobody can actually watch or review all of the video from those cameras except when there is a compelling reason to do so. These technologies are making that a much more realistic proposition.

The presence of cameras everywhere is considerably more concerning than the status quo, to me at least, when there is an AI watching and indexing every second of every feed—where camera owners or manufacturers or governments could set simple natural-language parameters to be notified about highly specific people or activities. There are obviously compelling and easy-to-sell cases here that will surely drive adoption as it becomes cost effective: get an alert about a crime in progress, get an alert when a neighbor doesn't clean up after his dog, get an alert when someone has fallen...but the potential implications of living in a panopticon like this, if not well regulated, are pretty ugly.

reply
citruscomputing 2 days ago
It's being built as we speak. I attended a city council meeting yesterday where they discussed approving a contract for ALPR cameras. I learned about a product from the camera vendor called Fusus[0], a dashboard that integrates various camera systems, ALPRs, alerts, etc. Two things stood out to me: natural-language querying of video feeds, and planned future integration with civilian-deployed cameras. The city only had budget for 50 ALPRs, and they stressed that they're only deploying them on main streets, but it seems like only a matter of time before your neighbor can install a camera that feeds right into the local PD's AI-enabled systems. One council member raised concerns about integration with the Citizen app[1] specifically (and a few others I didn't catch the names of). I'm very worried about where all this is heading.

[0]: https://www.axon.com/products/axon-fusus [1]: https://citizen.com/

reply
robertlagrant 8 hours ago
I live in Oxford, UK and walked past a police van that said "automatic facial recognition in use". Not exactly a good sign without any caveats. I imagine they recorded me staring at their van.
reply
sohamrj 2 days ago
Totally valid concern. Right now the cost ($2.50/hr) and latency make continuous real-time indexing impractical, but that won't always be the case. This is one of the reasons I'd want to see open-weight local models for this: it keeps the indexing on your own hardware, with no footage leaving your machine. But you're right that the broader trajectory here is worth thinking carefully about.
reply
jimmySixDOF 23 hours ago
How are you getting to $2.50/hr? The price sheet says it's $0.00079 per frame.

https://ai.google.dev/gemini-api/docs/pricing#gemini-embeddi...

reply
jjwiseman 22 hours ago
From what I see the code downsamples video to 5 fps, so 1 hour of video is 3600 seconds * 5 fps = 18,000 frames. 18,000 frames * $0.00079/frame = $14.22. A couple dollars more with the overlap.

(The code also tries to skip "still" frames, but if your video is dynamic you're looking at the cost above.)

reply
sohamrj 20 hours ago
you're right that the code uses ffmpeg to downsample the chunks to 5fps before sending them, but that's only a local/bandwidth optimization, not what the api actually processes.

regardless of the file's frame rate, the gemini api natively extracts and tokenizes exactly 1 fps. the 5 fps downscaling just keeps the payload sizes small so the api requests are fast and don't timeout.

i'll update the readme to make this more clear. thanks for bringing this up.
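for the curious, the preprocessing amounts to roughly this (flags approximated, not copied verbatim from the repo):

```python
import subprocess

def ffmpeg_cmd(src: str, dst: str, height: int = 480, fps: int = 5) -> list:
    """build the downscale command; scale=-2 keeps the width an even number"""
    return ["ffmpeg", "-y", "-i", src,
            "-vf", f"scale=-2:{height},fps={fps}",
            "-an",  # my dashcam chunks have no audio anyway
            dst]

def preprocess_chunk(src: str, dst: str) -> None:
    """shrink the upload payload; the api still samples 1 fps on its side,
    so this changes bandwidth/latency but not cost"""
    subprocess.run(ffmpeg_cmd(src, dst), check=True)
```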

reply
jjwiseman 17 hours ago
Thanks for the details and correction.
reply
mpalmer 2 days ago
It's $2.50 an hour because Google has margins. A nation state could do it at cost, and even if that's not a huge difference, a year's worth of embeddings for one continuous feed is just $21,900 ($2.50 × 24 × 365). That's a rounding error, especially considering it's a one-time cost for the footage.
reply
wholinator2 2 days ago
Right? $2.50 an hour is trivial to a government that can vote to invent a trillion dollars. Even just $1 million covers monitoring ~45 real-time feeds for a year. I'm sure many very rich people would pay that for the safety of their compound.
reply
Ajedi32 2 days ago
Most cameras are also not queryable by any one person or organization. They are owned by different companies, and if the government wants access it has to subpoena them after the fact.

The problems start cropping up when you get things like Flock where governments start deploying cameras on a massive scale, or Ring where a single company has unrestricted access to everyone's private cameras.

reply
Spivak 2 days ago
I think Flock is just a symptom of the underlying tech becoming so cheap that "just blanket the city in cameras" starts to sound like a viable solution when police rely so heavily on camera footage.

I don't think it's a good thing but it seems the limiting factor has been technological feasibility instead of any kind of principle against it.

reply
cake_robot 2 days ago
Yeah, the panopticon is now technically very feasible; it's just expensive to implement (for now).
reply
whattheheckheck 22 hours ago
It's very cheap to target an individual though, so they don't need to look everywhere.
reply
FuckButtons 22 hours ago
Once inference for something like the vision-understanding module of this can run on a low/medium-power ASIC, drones are going to be absolutely horrifying weapons.
reply
greggsy 2 days ago
All the major cloud providers offer some form of face detection and number-plate reading, and many cameras support object detection (i.e. package, vehicle, person) on the camera itself.
reply
macNchz 23 hours ago
It's definitely creeping into things, though most of the features I've seen are fairly simplistic compared to what would be possible if the video was being reviewed + indexed by current SoTA multimodal LLMs.
reply
zahlman 21 hours ago
> this exact use case is sort of also my biggest point of concern with where AI takes us, much more so than most of the common AI risks you hear lots of chatter about.

I've been hearing warnings that AI would be used for this since well before it seemed feasible.

reply
macNchz 20 hours ago
Not claiming to have hit on something unique here, but I think it’s realistic and often drowned out in favor of sci-fi nonsense.
reply
janalsncm 2 days ago
For specific people they probably wouldn’t use general embeddings. These embeddings can let you search for “tall man in a trenchcoat” but if you want a specific person you would use facial recognition.
reply
hypeatei 2 days ago
I think a general description is better for surveillance/tracking like this, no? If they're at a weird angle or intentionally concealing their face then facial recognition falls apart but being able to describe them naturally would result in better tracking IMO.
reply
macNchz 24 hours ago
Presumably the ideal is some kind of a fusion. Upload or tag some images/videos and link someone's social profiles and the system can look out for them based on facial recognition, gait recognition, vehicle/pets/common wardrobe items in combination.
reply
QubridAI 2 hours ago
This is one of those “oh, that’s actually a real product now” demos way more interesting than yet another chat wrapper.
reply
npilk 18 hours ago
Multimodal AI will lead to an interesting arms race in ad detection vs ad insertion. I played around with AI ad removal with older Gemini models, but it seems like this would be even more powerful to instantly identify ads (and potentially mute or strip them out).

https://notes.npilk.com/experiments-with-ai-adblock

reply
sbinnee 15 hours ago
Nice article. I saw someone depicting the future of web search with AI; the conclusion was not a bright one. Simply put, ads will never go away. Either AI providers will get paid to whitelist ads, or, even worse, the AI will directly promote advertised products.
reply
WarmWash 8 hours ago
People could collectively decide to start paying for stuff, and then most of our gripes would at least shift to providers not accommodating their customers.
reply
greesil 7 hours ago
Collective action is not our strong suit.
reply
CamperBob2 5 hours ago
To which I'd say to the advertiser, "Good luck paying off the AI adblocker running in my closet at home."

Then again, let's not be too hasty here. Let's see what you're willing to offer. I can sell you the eyeballs of the AI ad-watcher running in my closet for $10/impression. Or, for $1000/impression, you can bring your message to the attention of myself, an actual human. A bargain at any price!

reply
rigrassm 2 days ago
I picked up a Rexing dash cam a few months back and, after getting frustrated with how clunky it is to get footage off of it, I decided to look into building something myself to browse and download the recordings without having to pull the SD card. While scrolling through the recordings, I explicitly remember thinking it would be nice to just describe what I was looking for and run a search. Looking forward to incorporating this into my project.

Thanks for sharing!

reply
cloogshicer 2 days ago
Could this be used for creating video editing software?

Imagine a Premiere plugin where you could say "remove all scenes containing cats" and it'll spit out an EDL (Edit Decision List) that you can still manually adjust.

reply
sohamrj 2 days ago
Yeah, this is a great idea, I’ve actually been thinking about exactly this as the next logical step.

SentrySearch already returns precise in/out timestamps for any natural-language query and uses ffmpeg to auto-trim clips. Turning that into an EDL (or even a direct Premiere plugin that exports an editable cut list) feels natural.

I’m not a Premiere expert myself, but I’d love to see this happen. If you (or anyone) wants to sketch out a quick EDL exporter or plugin, I’ll happily review + merge a PR and help wherever I can. Just drop a GitHub issue if you start something!
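For anyone who wants to take a stab at it: the core of an exporter is mostly timecode formatting. A rough sketch in the CMX3600 style (column spacing and reel naming conventions vary by NLE, so treat this as illustrative, not spec-accurate):

```python
def to_timecode(seconds: float, fps: int = 30) -> str:
    """Format seconds as HH:MM:SS:FF, non-drop-frame."""
    frames = round(seconds * fps)
    ff = frames % fps
    s = frames // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{ff:02d}"

def edl_events(keeps: list, fps: int = 30) -> list:
    """keeps: (in, out) pairs in seconds to retain, e.g. everything the
    search did NOT match. Record times run back-to-back."""
    lines, rec = [], 0.0
    for n, (src_in, src_out) in enumerate(keeps, start=1):
        dur = src_out - src_in
        lines.append(
            f"{n:03d}  AX       V     C        "
            f"{to_timecode(src_in, fps)} {to_timecode(src_out, fps)} "
            f"{to_timecode(rec, fps)} {to_timecode(rec + dur, fps)}"
        )
        rec += dur
    return lines
```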

reply
mdrzn 2 days ago
Very interesting (not for a dashcam, but for home monitoring).
reply
SoftTalker 16 hours ago
Most home monitoring only records when there is movement, though? So that already compresses the search space a lot. And just zipping forward and back, it's pretty easy to quickly find the 30 seconds where a figure is walking up to your front door.
reply
fhe 21 hours ago
this function will be a must-have for all home security systems. I used to spend hours going through home security cameras to check if our cat had gotten out of the house when the door was accidentally left open (turned out it was just really good at hiding within the house).
reply
lwarfield 23 hours ago
Damn, I need to get going with my embeddings project. I've currently got a prototype that uses embeddings (not Gemini, in my case) for a game that's kind of a reverse Connections:

collections.lwarfield.dev

reply
ideashower 4 hours ago
Is there a local model that this would work with?
reply
bob1029 13 hours ago
> Check if a video chunk contains mostly still frames. Extracts 3 evenly-spaced frames as JPEG and compares file sizes.

I believe you could use ffmpeg's select filter with its scene-change score to do this automatically each time a chunk of video is created.
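Roughly like this, as a sketch (the scene threshold is a number you'd have to tune, and I'm assuming `metadata=print:file=-` sends its output to stdout):

```python
import subprocess

def count_scene_changes(metadata_text: str) -> int:
    """Count the lavfi.scene_score lines emitted by the metadata filter."""
    return sum(1 for line in metadata_text.splitlines()
               if line.strip().startswith("lavfi.scene_score="))

def is_mostly_still(path: str, threshold: float = 0.02) -> bool:
    """True if no frame's scene-change score exceeds the threshold,
    i.e. the chunk is idle and can be skipped before embedding."""
    proc = subprocess.run(
        ["ffmpeg", "-i", path, "-vf",
         f"select='gt(scene,{threshold})',metadata=print:file=-",
         "-f", "null", "-"],
        capture_output=True, text=True,
    )
    return count_scene_changes(proc.stdout) == 0
```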

reply
danbrooks 2 days ago
I work in content/video intelligence. Gemini is great for this type of use case out of the box.
reply
novoreorx 14 hours ago
In the demo bro shows how to search for "a car with a bike rack on the back that cut me off at night." Given the grudge he must've held from being cut off, I strongly suspect that finding this specific car was his main motivation for building the project in the first place
reply
sohamrj 8 hours ago
ur not wrong
reply
simonreiff 2 days ago
Very impressive! A webhook could be configured to trigger an alarm if a semantic match to any category of activities is detected, and then you basically have a virtual security guard and private investigator. Well played.
reply
sohamrj 2 days ago
Thanks! Yeah that would be pretty cool, but continuous indexing would be pretty expensive now, because the model's in public preview and there are no local alternatives afaik.

This very well might be a reality in a couple years though!
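Something like this could hang off the indexing loop once costs come down (the payload shape is made up for illustration, and a real version would want retries):

```python
import json
import urllib.request

def build_alert(query: str, clip_id: str, distance: float) -> bytes:
    """JSON payload for a matched clip; field names are illustrative."""
    return json.dumps(
        {"query": query, "clip": clip_id, "distance": distance}
    ).encode()

def post_alert(webhook_url: str, payload: bytes) -> None:
    """Fire a POST at the configured webhook when a new chunk's embedding
    lands within the alert threshold of a watched query."""
    req = urllib.request.Request(
        webhook_url, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)
```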

reply
CamperBob2 18 hours ago
Could https://qwen.ai/blog?id=qwen3-vl-embedding be a possible local alternative?
reply
bobafett-9902 2 days ago
I wonder if the underlying improvements in visual language learning will allow for even more efficient search. The First Fully General Computer Action Model -> https://si.inc/posts/fdm1/
reply
febed 17 hours ago
This seems like something that would be very expensive to run. Do you have some representative figures at a particular resolution and frame rate?
reply
addandsubtract 5 hours ago
The README on the GitHub has a section on this[0]:

>Indexing 1 hour of footage costs ~$2.84 with Gemini's embedding API (default settings: 30s chunks, 5s overlap):

>1 hour = 3,600 seconds of video = 3,600 frames processed by the model. 3,600 frames × $0.00079 = ~$2.84/hr

>The Gemini API natively extracts and tokenizes exactly 1 frame per second from uploaded video, regardless of the file's actual frame rate. The preprocessing step (which downscales chunks to 480p at 5fps via ffmpeg) is a local/bandwidth optimization — it keeps payload sizes small so API requests are fast and don't timeout — but does not change the number of frames the API processes.

[0] https://github.com/ssrajadh/sentrysearch#cost

reply
WatchDog 24 hours ago
I don't quite understand the 5-second overlap. I assume it's so that events that occur over a chunk boundary don't get missed, but are there any examples or benchmarks showing how useful this is?
reply
sohamrj 23 hours ago
yea, it's so events on a chunk boundary still get captured in at least one chunk. i haven't had the chance to do formal benchmarks on overlap vs. no-overlap yet. the 5s default is a pragmatic choice: long enough to catch most events that would otherwise be split, short enough not to add much cost (120 chunks/hr to ~138). it's also configurable via the --overlap flag.
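for concreteness, the chunking works roughly like this (boundary rounding in the actual code may differ a bit, so exact chunk counts are approximate):

```python
def chunk_bounds(duration_s, chunk_s=30, overlap_s=5):
    """(start, end) pairs in seconds. each chunk starts overlap_s before the
    previous one ends, so an event on a boundary lands fully inside at least
    one chunk."""
    stride = chunk_s - overlap_s
    bounds, start = [], 0
    while start + chunk_s < duration_s:
        bounds.append((start, start + chunk_s))
        start += stride
    bounds.append((start, duration_s))  # final, possibly shorter, chunk
    return bounds
```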
reply
subhashp 13 hours ago
Can I give it a photo of a person and ask it to search for the person in the video?
reply
nullbyte 2 days ago
What a brilliant idea! is this all done locally? That's incredible.
reply
apwheele 2 days ago
While the vector store is local, it is sending the data to Gemini's API for embedding. (Which if using a paid API key is probably fine for most use cases, no long term retention/training etc.)
reply
rao-v 20 hours ago
Is there a decent open video embedding model out there? I’d love to play with this without uploading video.
reply
QubridAI 2 days ago
This is a big leap: true multimodal search without text bottlenecks finally makes video querying feel native and insanely practical.
reply
ygouzerh 2 days ago
That's quite interesting, well done! I hadn't thought of this use case for embeddings. It opens the door to quite a few potential applications!
reply
stavros 2 days ago
Man, the surveillance applications for this are staggering.
reply
dev_tools_lab 2 days ago
Nice use of native video embedding. How do you handle cases where Gemini's response confidence is low? Do you have a fallback or threshold?
reply
sohamrj 2 days ago
as of now, no threshold but that is planned in the future.

for example, right now if i search "cybertruck" in my indexed dashcam footage, which contains no cybertrucks, it'll return a clip of the next best match, which is a big truck but not a cybertruck
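the planned threshold is basically a cutoff on the distances chromadb returns, something like this (0.5 is a placeholder to tune, and the dict shape mirrors a chromadb query result):

```python
def filter_matches(results: dict, max_distance: float = 0.5) -> list:
    """results: ChromaDB-style query output, e.g.
    {"ids": [["chunk_3", "chunk_7"]], "distances": [[0.31, 0.74]]}.
    Drops anything farther than max_distance instead of returning the
    'next best' clip."""
    ids, dists = results["ids"][0], results["distances"][0]
    return [(i, d) for i, d in zip(ids, dists) if d <= max_distance]
```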

reply
dev_tools_lab 12 hours ago
Makes sense for now. Thresholding becomes critical at scale though — good luck with the next iteration!
reply
cat-turner 23 hours ago
This is great, thanks for sharing
reply
kamranjon 2 days ago
Does anyone know of an open weights models that can embed video? Would love to experiment locally with this.
reply
sohamrj 2 days ago
Not aware of any that do native video-to-vector embedding the way Gemini Embedding 2 does. There are CLIP-based models (like VideoCLIP) that embed frames individually, but they don't model the temporal dimension; you'd need to average frame embeddings, which loses a lot.

Would love to see open-weight models with this capability since it would eliminate the API cost and the privacy concern of uploading footage.
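To make "loses a lot" concrete: mean-pooling is order-invariant, so a clip and the same clip played backwards pool to exactly the same vector (toy frame vectors below, not real CLIP embeddings):

```python
import numpy as np

def pool_frames(frame_embs: np.ndarray) -> np.ndarray:
    """frame_embs: (n_frames, dim) per-frame embeddings. Mean-pool, then
    L2-normalize; the usual cheap trick, with temporal order discarded."""
    v = frame_embs.mean(axis=0)
    return v / np.linalg.norm(v)

clip = np.random.default_rng(0).normal(size=(8, 16))
# "car pulls in" vs. "car pulls out": reversed frames, identical vector
assert np.allclose(pool_frames(clip), pool_frames(clip[::-1]))
```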

reply
CamperBob2 18 hours ago
A quick search brought up https://qwen.ai/blog?id=qwen3-vl-embedding but I have no idea if it does what Gemini is doing here.
reply
jakejmnz 4 hours ago
more or less works similarly, made a proof of concept for it: https://github.com/jakejimenez/sentinelsearch
reply
sans_souse 20 hours ago
Total aside here but is that you driving the pickup I assume?
reply
sohamrj 20 hours ago
haha no i'm driving the tesla and that clip is from the left repeater camera (teslas record from all around the car)
reply
crashabr 20 hours ago
I wonder how well this would work with dance videos.
reply
totisjosema 2 days ago
What is your experience so far with the quality of the retrieved pieces?
reply
sohamrj 2 days ago
I've found I have to be very specific to get the clip I'm searching for. For example, "car cuts me off" just returned a clip of a car driving past my blindspot. But, "car with bike rack on back cuts me off at night" gave me exactly the clip I was looking for.
reply
Aeroi 2 days ago
very cool, anybody have apparent use cases for this?
reply
mannyv 23 hours ago
Indexing all your porn and skipping all the filler.
reply
iso1631 10 hours ago
isn't the "fill her" the point of porn?
reply
sohamrj 2 days ago
dashcam and home security footage are the 2 main ones i can think of.

a bit expensive right now so it's not as practical at scale. but once the embedding model comes out of public preview, and we hopefully get a local equivalent, this will be a lot more practical.

reply
giozaarour 2 days ago
I think a good use case would be searching for certain products or videos across social media (TikTok and Instagram). Especially useful for shopping, maybe.
reply
vidarh 2 days ago
Branding/marketing monitoring companies would be all over this.
reply
hebelehubele 2 days ago
State surveillance
reply
wahnfrieden 2 days ago
Worker surveillance
reply
CamperBob2 18 hours ago
Trail and game cams come to mind. "Create a montage of all deer encounters," "Find first appearance of black bear this year," that sort of thing.
reply
thegabriele 24 hours ago
Why just the dash cam?
reply
sohamrj 23 hours ago
dashcam is just one of the use cases and the one i tested on. but this could theoretically work with any kind of video footage like home security footage
reply
SpaceManNabs 2 days ago
> No transcription, no frame captioning, no intermediate text.

If there is text on the video (like a caption or wtv), will the embedding capture that? Never thought about this before.

If the video has audio, does the embedding capture that too?

reply
sohamrj 2 days ago
Yes to both. The embedding is over raw video frames, so anything visible (text, signs, captions) gets captured in the vector. And Gemini Embedding 2 extracts the audio track and embeds it alongside the visual frames. So a query like 'someone yelling' would theoretically match on audio. My dashcam footage doesn't have audio though, so I haven't tested that side yet.
reply
7777777phil 2 days ago
Today I learned that Gemini can now natively embed video.

Cool Project, thanks for sharing!

reply
klntsky 2 days ago
why not skip the text conversion? is it usable at all?
reply
sohamrj 2 days ago
gemini embedding 2 converts video straight to vectors. in this case, dashcam clips don't have audio to transcribe, and even if they did, a transcript would be useless for this kind of search
reply
password4321 2 days ago
What are the SoA audio models right now?
reply
rkaliupin 21 hours ago
[dead]
reply
hikaru_ai 15 hours ago
[dead]
reply
matzalazar 2 days ago
[dead]
reply
emsign 2 days ago
Where is the Exit to this dystopia?
reply
nclin_ 2 days ago
Well, with data-analysis powers like this, a few treasonous words in front of a Flock camera will show you the way.
reply
RobotToaster 2 days ago
In the matrix the exit was pay phones, which perhaps explains why our overlords are removing them
reply
greesil 7 hours ago
Suicide booths a la Futurama
reply
moomoo11 14 hours ago
You don’t wanna live in Night City?
reply
anxoo 24 hours ago
reply
sbinnee 15 hours ago
Thanks for sharing. They say "pause", not stop. Assume that we pause now. When should we resume then? How do we know?
reply
jama211 2 days ago
I don’t think this means we’re in a dystopia
reply
zwirbl 2 days ago
You might not have been paying attention
reply
52-6F-62 2 days ago
I think Radiohead said that
reply
draw_down 2 days ago
The dystopia of searching for video clips and finding them? What?
reply
bitexploder 24 hours ago
Yes? Right now it is relatively expensive to search video. As embedding tech like this advances and makes it even cheaper it just increases the ability to search and analyze every movement. “Locate speech patterns that indicate dissident activity using the dissident activity skill”
reply
BrokenCogs 2 days ago
The Matrix style human pods: we live in blissful ignorance in the Matrix, while the LLMs extract more and more compute power from us so some CEO somewhere can claim they have now replaced all humans with machines in their business.
reply
throwup238 2 days ago
I was thinking more of the season 3 episode of Doctor Who titled Gridlock where everyone lives in flying cars circling a giant expressway underground, while all the upper class people on the surface died years ago from a pandemic.
reply
ting0 2 days ago
Ever get the feeling that the universe is reading your mind? Maybe there's some truth to that after all.
reply