Campaign Focus: Target high-scale developer platforms and content delivery companies burning cash on AWS S3/CloudFront egress. Position R2 as cost-effective with zero egress.
Target Persona: CTO / Head of Infrastructure (likely Martin Šašek or equivalent)
Why this angle works: Seznam operates 30+ consumer services (Mapy.cz, Stream.cz, Sbazar, Novinky, Email) with massive content delivery needs. Mapy.cz alone competes with Google Maps in CZ, serving millions of map tiles daily. Stream.cz delivers video. You don't need to guess whether they have egress costs: delivering tiles, video, and API responses to 6M+ daily users is almost certainly one of their largest operational expenses. They have historically built infrastructure in-house, but offloading non-differentiated delivery to the edge is a proven optimization.
Source: Seznam operates Mapy.cz (tile servers), Stream.cz (video CDN), and numerous API-driven services. Public job postings show infrastructure team hiring for "vysoká zátěž" (high load) systems.
Subject line: "Map tiles + video delivery — edge offload question for Seznam infra team"
Draft email:
Hi [Name],
Seznam built something remarkable — a homegrown internet ecosystem that competes with global giants. Mapy.cz tiles, Stream video, 30+ services running at national scale.
I'm curious about your infrastructure strategy for content delivery. Companies at similar scale (e.g., Shopify delivering to millions of merchants) have moved map tiles, video segments, and API responses to edge storage + compute to free engineering resources for core product work.
Would you be open to a 15-minute conversation about how Seznam thinks about edge infrastructure? No pitch — genuinely curious about your approach.
Best,
[Your name]
Target Persona: CTO / Platform Architect (Marek Trunkát or Jan Čurn)
Why this angle works: Apify stores scraped datasets for 25,000+ customers. Every customer who downloads their scraped data triggers egress charges. At Series B scale with 25K customers this isn't hypothetical: it's a real cost that grows linearly with customer count. Their platform is cloud-native (AWS is visible in their architecture), which makes storage and egress a significant COGS line item. As they scale toward 100K customers, unoptimized storage/egress becomes a margin killer.
Source: Apify stores Actor run outputs and datasets in cloud storage. Their pricing page shows dataset storage and API access as core platform features. Series B (2022) from J&T Ventures and others signals scaling pressure.
Subject line: "Dataset delivery costs at 25K customers — Apify infrastructure question"
Draft email:
Hi [Name],
Congrats on the Series B and 25K customers. Scaling a scraping platform means you're storing and serving massive datasets back to users.
I'm curious about your infrastructure cost curve as you grow from 25K to 100K customers. Specifically: how do you think about the cost of dataset storage and delivery? I've seen similar platform companies discover that storage egress becomes their second-largest infrastructure cost — and it's invisible until it isn't.
Cloudflare R2 has zero egress fees, which changes the economics entirely for data-heavy platforms. Would you be open to a quick conversation about how Apify thinks about storage economics at scale?
Best,
[Your name]
Target Persona: CTO / VP Platform Engineering
Why this angle works: Make (formerly Integromat) is an automation platform acquired by Celonis. Every workflow execution triggers API calls, data transfers, and webhook deliveries. At "millions of workflows" scale, the volume of data moving between Make's servers and third-party APIs (Slack, Salesforce, Google Sheets, etc.) is enormous. Unlike a content site where you can CDN-cache static assets, Make's product IS real-time API orchestration. Every webhook payload, every file transfer, every scenario execution generates data egress. As they scale from millions to billions of executions, egress becomes a dominant infrastructure cost.
Source: Make is an automation platform connecting 1,000+ apps. Celonis acquisition (2020) signaled enterprise scaling. Workflow execution volume is their core metric — each execution involves multiple API calls and data transfers.
Subject line: "Workflow execution costs at scale — Make infrastructure question"
Draft email:
Hi [Name],
Make processes millions of workflow executions daily — connecting 1,000+ apps through APIs, webhooks, and data transfers.
I'm curious about your infrastructure economics as you scale. Each workflow execution triggers multiple API calls and data transfers between Make's servers and third-party services. At Celonis-scale enterprise volume, the cumulative data egress becomes a significant cost driver.
Companies at similar orchestration scale (Zapier, Workato) have optimized by moving intermediate data storage to edge platforms with zero egress fees — cutting data transfer costs by 60-70% without changing their architecture.
Would you be open to a brief conversation about how Make thinks about execution infrastructure economics?
Best,
[Your name]
Target Persona: CTO / VP Engineering
Why this angle works: Recombee processes 1 billion+ recommendation API requests daily. Each request returns a payload of recommended items (product IDs, metadata, images). At 1B requests/day with even a 5KB average payload, that's 5TB of data egress daily — 150TB/month. Their entire business model is API delivery. Unlike a web app where you can optimize frontend, Recombee's product IS the API response. Every optimization to response delivery directly improves margins.
Source: Recombee publicly states "1 billion+ recommendations per day" on their website. Their API returns JSON payloads with product metadata and image URLs. Typical recommendation payload is 5-20KB depending on result count.
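A quick back-of-envelope check on that arithmetic, useful if the prospect pushes back on the numbers. A minimal sketch, assuming the 1B requests/day and ~5KB average payload figures stated above (Recombee's actual traffic mix is unknown):

```python
# Back-of-envelope egress volume for the Recombee angle.
# Assumptions (from the rationale above, not measured data):
#   ~1B API responses/day, ~5 KB average payload.
REQUESTS_PER_DAY = 1_000_000_000
AVG_PAYLOAD_KB = 5

daily_tb = REQUESTS_PER_DAY * AVG_PAYLOAD_KB / 1_000_000_000  # KB -> TB (decimal units)
monthly_tb = daily_tb * 30

print(f"Daily egress:   ~{daily_tb:.1f} TB")    # ~5.0 TB/day
print(f"Monthly egress: ~{monthly_tb:.0f} TB")  # ~150 TB/month
```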
Subject line: "1B API responses/day — edge caching for recommendation payloads?"
Draft email:
Hi [Name],
Recombee processes 1B+ recommendations daily — that's a remarkable scale for a Czech-founded company.
I was thinking about your infrastructure challenge: every recommendation API call returns a payload that travels across the internet to your customers' servers. At billion-request scale, even small optimizations to response delivery (edge caching, compression, smarter routing) compound into significant cost savings.
Companies like Shopify handle similar API volume by caching recommendation results at the edge, reducing origin egress by 60-80%. I'd love to understand how Recombee thinks about API delivery economics — would you have 15 minutes for a conversation?
Best,
[Your name]
Target Persona: CTO / Founder (Petr Pridal)
Why this angle works: MapTiler is an open-source-first geospatial platform serving 500M+ map tiles daily. Map tiles are small (10-50KB each) but requested in massive quantities — a single map view loads 20-40 tiles. At 500M tiles/day with 25KB average, that's 12.5TB of data egress daily, or ~375TB/month. Their business model is API-based: developers pay per tile request. The cost to serve those tiles directly impacts margins. MapTiler competes with Google Maps Platform and Mapbox — both of which have raised prices multiple times. MapTiler's open-source DNA (built on OpenMapTiles) means they're cost-conscious and infrastructure-savvy.
Source: MapTiler publicly states "500M+ tiles served daily." Map tiles average 10-50KB. Competitors Google Maps and Mapbox have raised API prices 2-3x in recent years. MapTiler is built on OpenMapTiles open-source project.
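The 375TB figure in the subject line follows directly from this math. A minimal sketch, assuming the 500M tiles/day and ~25KB average tile size stated above; the ~$0.09/GB rate is an illustrative first-tier public list price for S3 internet egress, not MapTiler's actual or negotiated rate:

```python
# Tile egress estimate behind the "375TB monthly tile egress" subject line.
# Assumptions (from the rationale above): ~500M tiles/day at ~25 KB each.
TILES_PER_DAY = 500_000_000
AVG_TILE_KB = 25
ILLUSTRATIVE_EGRESS_USD_PER_GB = 0.09  # illustrative public list price; hedge in conversation

daily_tb = TILES_PER_DAY * AVG_TILE_KB / 1_000_000_000  # KB -> TB (decimal units)
monthly_gb = daily_tb * 30 * 1_000                      # TB -> GB

print(f"Daily tile egress:   ~{daily_tb:.1f} TB")            # ~12.5 TB/day
print(f"Monthly tile egress: ~{monthly_gb / 1_000:.0f} TB")  # ~375 TB/month
print(f"Illustrative egress bill: ~${monthly_gb * ILLUSTRATIVE_EGRESS_USD_PER_GB:,.0f}/month")
print("Same volume served from zero-egress object storage: $0 in egress fees")
```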
Subject line: "375TB monthly tile egress — MapTiler infrastructure economics"
Draft email:
Hi Petr,
MapTiler has built an incredible open-source geospatial platform — 500M tiles daily is serious scale.
I was doing some math: at ~25KB per tile, you're serving roughly 375TB of map data monthly. That's a massive infrastructure cost, especially as you compete with Google Maps and Mapbox, both of which keep raising their prices.
I'm curious about your tile delivery strategy. Companies at similar scale have moved tile storage to edge object storage with zero egress fees — cutting delivery costs by 70-80% while improving global performance. The OpenMapTiles architecture maps well to this approach.
Would you be open to a conversation about MapTiler's infrastructure strategy?
Best,
[Your name]
Target Persona: Founder / CTO (Ivo Lukáčovič)
Why this angle works: Windy.com is a weather visualization platform with 50M+ users founded by Ivo Lukáčovič (who also founded Seznam). The app delivers map tiles, GRIB weather data, and model outputs globally. At 50M users requesting animated weather layers, the data transfer is enormous. Unlike text-based APIs, weather visualization requires continuous tile delivery. The product is beloved by sailors, pilots, and meteorologists — but the infrastructure bill to serve global weather data is substantial. Ivo is a technical founder who cares about performance and independence.
Source: Windy.com publicly reports 50M+ users. The app delivers animated weather map tiles (WebGL/Canvas) and GRIB data. Ivo Lukáčovič is a well-known Czech tech founder (Seznam.cz founder) who actively discusses infrastructure on social media.
Subject line: "Weather tile delivery at 50M users — infrastructure question for Ivo"
Draft email:
Hi Ivo,
Windy is the best weather visualization product I've ever used — and I've tried them all as a sailor.
I'm curious about your infrastructure strategy for global tile delivery. Serving animated weather layers to 50M users means massive data transfer, especially during storm seasons when usage spikes 3-5x.
I work with companies at similar scale who've moved weather tile delivery to edge networks — reducing origin load by 70% while improving load times in APAC and South America. Would you be open to a brief conversation about how Windy thinks about global delivery?
Best,
[Your name]
Target Persona: CTO / ML Platform Lead
Why this angle works: Ximilar provides visual AI APIs (fashion tagging, image search, visual similarity) to e-commerce companies. Their API accepts images (uploaded by clients) and returns AI-generated tags, embeddings, or similar products. The workflow involves: (1) client uploads image to Ximilar, (2) Ximilar processes via ML model, (3) results are returned. Image uploads + model weights + result payloads create significant data transfer. E-commerce clients send high-resolution product images (1-5MB each). At 25M+ users processing thousands of images daily, storage and egress scale linearly. Ximilar's pricing is per API call — so infrastructure costs directly impact unit economics and competitive pricing.
Source: Ximilar provides visual AI APIs for fashion and e-commerce. Their API processes product images (typically 1-5MB). Pricing is per API call. Visual AI inference requires GPU compute + image storage + result delivery.
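If the Ximilar conversation turns technical, one concrete way to show what zero-egress storage for images and results could look like is a presigned-URL flow against S3-compatible object storage. This is a hypothetical sketch, not Ximilar's actual architecture; the endpoint, credentials, bucket, and key names are placeholders:

```python
# Hypothetical sketch: image intake and result delivery through S3-compatible,
# zero-egress object storage (e.g., R2). Placeholders throughout; not Ximilar's
# real architecture.
import json
import boto3

# Any S3-compatible endpoint works; for R2 it looks like
# https://<ACCOUNT_ID>.r2.cloudflarestorage.com
s3 = boto3.client(
    "s3",
    endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)

BUCKET = "visual-ai-jobs"  # hypothetical bucket name


def presign_image_upload(job_id: str, expires: int = 3600) -> str:
    """Let the client upload the raw product image straight to object storage."""
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": f"uploads/{job_id}.jpg"},
        ExpiresIn=expires,
    )


def publish_result(job_id: str, tags: list[str], expires: int = 3600) -> str:
    """Store the inference result, then hand back a presigned download URL.
    On zero-egress storage the customer's download incurs no per-GB fee."""
    s3.put_object(
        Bucket=BUCKET,
        Key=f"results/{job_id}.json",
        Body=json.dumps({"job_id": job_id, "tags": tags}).encode(),
        ContentType="application/json",
    )
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": f"results/{job_id}.json"},
        ExpiresIn=expires,
    )
```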
Subject line: "Image API economics at 25M users — Ximilar infrastructure question"
Draft email:
Hi [Name],
Ximilar's visual AI platform is impressive — fashion tagging, visual search, and similarity matching at 25M+ user scale.
I'm thinking about your infrastructure economics. Your clients upload product images (1-5MB each) for AI processing, then you return tags, embeddings, or similar products. Each API call involves image storage, GPU inference, and result delivery.
At scale, image storage and egress become significant cost drivers — especially since your pricing is per API call. Optimizing image storage and delivery (edge caching, compressed formats, zero-egress storage) directly improves your unit economics and lets you price more competitively against Amazon Rekognition and Google Vision.
Worth a brief conversation?
Best,
[Your name]
Target Persona: CTO / VP Engineering
Why this angle works: Smartlook records user sessions as video-like data streams for replay and heatmaps. Each session generates continuous screen capture data that must be stored and served on demand. Unlike static analytics (page views, events), session replay data is video-like — it doesn't compress well and can't be effectively CDN-cached because each session is unique. When product managers watch replays, data egress occurs for every view. At enterprise scale with thousands of sessions recorded and watched daily, this creates massive storage and egress costs that grow linearly with usage.
Source: Smartlook provides session replay and heatmap analytics. Session replay involves continuous screen capture data. Enterprise customers record thousands of sessions daily. Competitor Hotjar has publicly discussed infrastructure challenges at scale.
Subject line: "Session replay egress costs — Smartlook infrastructure question"
Draft email:
Hi [Name],
Smartlook's session replay technology is powerful — but I'm curious about the infrastructure economics.
Session replay data is essentially a video-like stream that doesn't compress well and can't be CDN-cached (each session is unique). Every time a product manager watches a replay, you're serving that data again. At enterprise scale with thousands of daily replays, egress becomes a significant cost driver.
Companies like Hotjar have optimized session storage by moving to platforms with different pricing models for video-like data. The result: 50%+ cost reduction without changing the product experience.
Would you be open to a brief conversation about Smartlook's infrastructure strategy?
Best,
[Your name]
Target Persona: CTO / VP Engineering (Václav Hodek or equivalent)
Why this angle works: Localazy is a software localization platform that serves translated content (JSON, YAML, XML, ICU strings) to mobile apps and web applications via API and CDN. Every time a developer pulls translations or an app updates its language pack, Localazy serves file data from storage. At scale with thousands of apps and millions of end-users, translation file delivery becomes a significant egress cost. Their "over-the-air" updates feature pushes translation updates directly to live apps — this is constant, recurring data transfer. As they scale from startup to platform, optimizing delivery costs directly improves margins on their freemium and pay-per-usage model.
Source: Localazy provides software localization with CDN delivery and over-the-air updates. Translation files for mobile apps typically range from 50KB-500KB per language. Their pricing includes "CDN for fast delivery" — indicating delivery is a core infrastructure cost.
Subject line: "Translation delivery costs at scale — Localazy infrastructure question"
Draft email:
Hi [Name],
Localazy has built an excellent developer-focused localization platform — the over-the-air updates feature is particularly smart for mobile teams.
I'm thinking about your infrastructure economics as you scale. Every app pulling translation files through your CDN, plus over-the-air updates pushed to millions of devices, adds up to a data delivery cost that grows linearly with your customer base.
For platforms with similar delivery profiles (mobile asset delivery, config file distribution), moving to zero-egress object storage has cut delivery costs by 70-80% without changing the developer experience. The S3-compatible API means minimal engineering work.
Would you be open to a brief conversation about how Localazy thinks about localization delivery infrastructure?
Best,
[Your name]
Target Persona: CTO / VP Engineering
Why this angle works: Satismeter runs in-app NPS and satisfaction surveys, collecting feedback from end-users and serving survey widgets to web and mobile apps. Their platform stores survey responses, user metadata, and analytics data — then serves aggregated dashboards and raw data exports back to customers. Multi-tenant SaaS means each customer's data is isolated but served from shared infrastructure. When customers export large datasets or view detailed response analytics, significant data egress occurs. Satismeter also delivers survey widgets (JavaScript snippets) to millions of end-user browsers. The combination of widget delivery, response storage, and dashboard data transfer creates a storage + egress cost structure that gets expensive as they add enterprise customers with large user bases.
Source: Satismeter provides in-app NPS and CSAT surveys. Their platform stores survey responses and serves analytics dashboards. Enterprise customers include SaaS companies with large user bases. Multi-tenant SaaS architectures typically face storage/egress scaling challenges at enterprise tier.
Subject line: "Survey data delivery economics — Satismeter infrastructure question"
Draft email:
Hi [Name],
Satismeter's approach to in-app NPS and CSAT is elegant — surveys that feel native to the product experience.
I'm curious about your infrastructure economics as you move upmarket. Serving survey widgets to millions of end-users, plus enterprise customers exporting large response datasets and viewing detailed analytics, creates a data delivery cost that scales with customer size.
Similar feedback platforms have found that storage and egress become 20-30% of infrastructure costs at enterprise scale. Moving response data and widget assets to zero-egress storage changes the unit economics — especially when serving high-volume enterprise accounts.
Would you be open to a brief conversation about how Satismeter thinks about data delivery infrastructure?
Best,
[Your name]
Target Persona: CTO / Head of Engineering
Why this angle works: Glami is a fashion search engine and aggregator operating across 20+ European markets. They index millions of products from 1,000+ e-shops, storing and serving high-resolution product images, structured product data, and search results. A single search result page on Glami displays 20-40 product images. With millions of monthly visits and users browsing multiple pages, image delivery is likely their dominant infrastructure cost. Fashion e-commerce images are large (high-resolution photos on white backgrounds, detail shots, lifestyle images), typically 100-300KB each. At their scale, image CDN egress alone likely runs tens of terabytes monthly. Additionally, their API serves product feeds to partner shops and affiliate networks. Every optimization to image delivery and API response caching directly improves margins.
Source: Glami operates as a fashion aggregator across CZ, SK, and 18+ other European markets. Their platform indexes products from 1,000+ e-shops. Typical fashion product images are 100-300KB. Search result pages display 20-40 products with multiple images each.
Subject line: "Fashion image delivery across 20 markets — Glami infrastructure economics"
Draft email:
Hi [Name],
Glami has built the leading fashion search platform in Central Europe — 20+ markets, millions of products from thousands of e-shops.
I was thinking about your infrastructure challenge: every search result page displays 20-40 high-resolution product images. Fashion imagery is large (100-300KB per image), and with millions of monthly visits, image delivery becomes a dominant cost center.
Similar aggregators (Lyst, Stylight) have optimized image delivery by moving to edge storage with smart compression and format optimization — cutting delivery costs by 60% while improving page load times. The S3-compatible API makes migration straightforward.
Would you be open to a conversation about how Glami thinks about image delivery infrastructure across European markets?
Best,
[Your name]
Target Persona: CTO / Head of E-commerce Technology
Why this angle works: Bonami is a curated home decor and furniture e-commerce platform operating across CZ, SK, PL, and RO. Their product catalog relies heavily on visual storytelling — large lifestyle images showing products in styled room settings, detail shots, and 360-degree views. Home decor images are among the largest in e-commerce (often 500KB-2MB for high-resolution lifestyle photography). Bonami runs seasonal campaigns and flash sales that drive traffic spikes 5-10x normal levels. During these spikes, CDN egress costs spike proportionally. Their business model depends on visual inspiration — customers browse lookbooks, room ideas, and curated collections, loading dozens of images per session. Unlike commodity e-commerce, Bonami can't simply compress images aggressively without degrading the brand experience. They need smart image optimization (WebP/AVIF conversion, responsive sizing) combined with cost-efficient delivery.
Source: Bonami operates in CZ, SK, PL, and RO as a curated home decor marketplace. Their product pages feature large lifestyle imagery. Flash sales and seasonal campaigns (Black Friday, spring refresh) drive significant traffic spikes. Home decor lifestyle images are typically 500KB-2MB.
Subject line: "Seasonal campaign image delivery — Bonami infrastructure question"
Draft email:
Hi [Name],
Bonami has created a beautiful curated shopping experience — the lifestyle imagery and room inspiration sets you apart from commodity marketplaces.
I'm thinking about your infrastructure during peak periods. Home decor lifestyle images are large (500KB-2MB), and seasonal campaigns drive 5-10x traffic spikes. That means your image delivery costs also spike 5-10x — right when margins matter most.
Companies like West Elm and Made.com have tackled this by moving image delivery to edge platforms with automatic format optimization (WebP/AVIF) and zero egress fees. They cut delivery costs by 50-60% while improving load times — without degrading visual quality.
Would you be open to a brief conversation about how Bonami handles image delivery during peak campaigns?
Best,
[Your name]
Target Persona: VP Engineering / Head of Marketplace Infrastructure
Why this angle works: Allegro is the largest e-commerce marketplace in Central and Eastern Europe, serving 14M+ active buyers and 135K+ sellers. Their platform handles: (1) product images — 135K sellers upload multiple images per product, resulting in tens of millions of images; (2) seller APIs — sellers integrate via API for inventory management, order processing, and pricing updates; (3) search and recommendation APIs — powering product discovery across millions of SKUs. Every API call, every image load, every feed download generates data egress. At marketplace scale, these costs are enormous. Allegro's acquisition of Mall Group expanded their infrastructure footprint across CZ, SK, HU, and PL — multiplying the delivery surface area. Unifying storage and delivery infrastructure across markets creates both cost savings and operational consistency.
Source: Allegro reports 14M+ active buyers and 135K+ sellers. Acquired Mall Group in 2022, expanding across CZ/SK/HU/PL. Marketplace platforms typically store 10-50M+ product images. Seller API integrations involve constant data exchange for inventory, orders, and pricing.
Subject line: "Marketplace delivery economics — Allegro infrastructure question"
Draft email:
Hi [Name],
Allegro's scale is remarkable — 14M+ buyers, 135K+ sellers, and now multi-market operations after the Mall Group acquisition.
I'm thinking about the infrastructure complexity of operating a marketplace at this scale. Tens of millions of product images, seller APIs for inventory and order management, plus search and recommendation services — all generating massive data transfer.
Marketplaces like eBay and Etsy have optimized by unifying storage and delivery infrastructure across markets, cutting data transfer costs by 40-50% while improving API response times globally. The S3-compatible migration path makes this achievable without architectural overhaul.
Would you be open to a conversation about how Allegro thinks about marketplace infrastructure across CEE markets?
Best,
[Your name]
Target Persona: CTO / Head of Product Engineering
Why this angle works: Slevomat is a daily deals and experience marketplace operating across CZ, SK, and PL (as Experto). Their business model involves: (1) high-volume email campaigns — daily deal newsletters with rich imagery sent to millions of subscribers; (2) deal pages with large lifestyle images (restaurants, spa visits, travel packages); (3) mobile app API serving deals, vouchers, and merchant data. Daily deals are inherently time-sensitive and image-heavy — customers buy based on visual appeal. Every email open loads images from CDN. Every app browse loads deal photos. Every voucher redemption hits the API. The business runs on high-frequency, image-heavy communication. Additionally, Slevomat has faced margin pressure as the daily deals market matured — making infrastructure cost optimization a real priority for maintaining profitability.
Source: Slevomat operates daily deals platforms across CZ, SK, and PL. Their business relies on daily email newsletters to millions of subscribers. Deal imagery (restaurants, travel, experiences) is central to conversion. Daily deals market has matured, creating margin pressure across the industry.
Subject line: "Daily deal image delivery — Slevomat infrastructure economics"
Draft email:
Hi [Name],
Slevomat has dominated the daily deals market across CZ, SK, and PL — millions of subscribers opening deal emails and browsing experiences every day.
I'm thinking about your infrastructure economics. Daily deal businesses are uniquely image-intensive: every email newsletter loads multiple lifestyle photos, every deal page shows high-res imagery, and your app serves images to mobile users constantly. At daily frequency, this delivery cost compounds fast.
Similar experience marketplaces (Groupon, Travelzoo) have reduced image delivery costs by 50-60% by moving to edge storage with smart compression and zero egress fees — without changing the email or app experience.
Would you be open to a brief conversation about how Slevomat thinks about deal delivery infrastructure?
Best,
[Your name]
Target Persona: CTO / Co-founder
Why this angle works: Mapotic is a community-driven mapping platform that lets organizations and communities create custom interactive maps. Users upload geospatial data, photos, points of interest, and custom map layers. Unlike static map platforms, Mapotic handles user-generated content: photos, 3D models, GPS tracks, and community contributions. Every map view loads tiles, markers, images, and user contributions. Communities range from tourism boards to NGOs to event organizers — each with their own map data and media. As the platform scales, the combination of map tiles, user-generated images, and API data creates a storage and egress profile similar to social media platforms. Mapotic's open, community-driven model means data grows organically and unpredictably — making cost-efficient storage and delivery essential for sustainable growth.
Source: Mapotic provides community mapping platform for tourism, NGOs, and events. User-generated content includes photos, GPS tracks, and custom map layers. Community-driven platforms typically see organic, unpredictable data growth. Map tiles + user media create compound storage/egress costs.
Subject line: "Community map delivery economics — Mapotic infrastructure question"
Draft email:
Hi [Name],
Mapotic's community-driven approach to mapping is refreshing — giving tourism boards, NGOs, and local communities the tools to create meaningful interactive maps.
I'm thinking about your infrastructure challenge. Community maps are dynamic: users upload photos, GPS tracks, and custom layers continuously. Every map view serves tiles, markers, and media to visitors. As communities grow, storage and delivery costs grow with them — often unpredictably.
Community platforms like Strava and AllTrails have tackled this by moving user-generated content and map data to edge storage with zero egress fees. They handle organic growth without proportional cost increases — freeing resources to invest in platform features instead of infrastructure.
Would you be open to a brief conversation about how Mapotic thinks about scaling community map infrastructure?
Best,
[Your name]
Hook: Lead with egress cost savings. "You're paying $X/month for data transfer out of S3/CloudFront. On R2, egress is $0."
Proof Points: Zero egress fees, S3-compatible API, 11 nines durability, Workers integration.
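For technical follow-ups, the "S3-compatible API" proof point can be made concrete with a short sketch: an existing boto3-based S3 client only needs its endpoint and credentials swapped to point at R2. Account ID, credentials, bucket, and object key below are placeholders, not real values:

```python
# Minimal illustration of R2's S3 compatibility: existing S3 client code keeps
# working once the endpoint and credentials point at R2. Placeholders throughout.
import boto3

r2 = boto3.client(
    "s3",
    endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
    aws_access_key_id="<R2_ACCESS_KEY_ID>",
    aws_secret_access_key="<R2_SECRET_ACCESS_KEY>",
    region_name="auto",  # R2 uses "auto" as its region
)

# The same S3 calls a team already uses keep working unchanged.
r2.put_object(Bucket="tiles", Key="12/2197/1342.pbf", Body=b"tile bytes")
obj = r2.get_object(Bucket="tiles", Key="12/2197/1342.pbf")
print(obj["ContentLength"], "bytes read back, with no egress fee on the way out")
```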