Show HN: Sync’ing data to your customer’s Google Sheets

Hey HN! Charles here from Prequel (https://prequel.co). We just launched the ability to sync data from your own app/db/data warehouse to any of your customers’ Google Sheets, CSV, or Excel files, and I wanted to share a bit more about how we built the Google Sheets integration. If you’re curious, here’s a quick GIF demo of our Google Sheets destination: https://ift.tt/QBupqCS.

Quick background on us: we make it easy to integrate with and sync data to data warehouses. The problem is, there are plenty of folks who want access to their data but don’t have a data warehouse, or don’t know how to use one: FP&A teams, customer success teams, etc. To serve them, we added some non-db destinations to Prequel: Google Sheets, CSV, and Excel.

We had to rework some core assumptions to get Google Sheets to work. By default, Prequel does incremental syncs, meaning we only write net-new or updated data to the destination. To avoid duplicate rows, we typically perform those writes as upserts, which is pretty trivial in most SQL dialects. But since Google Sheets is not actually a db, it has no concept of an upsert, and we had to get creative.

We had two options. The first was to force every Google Sheets sync to be a “full refresh” (i.e., grab all the data and brute-force write it to the sheet). The downside is that this gets expensive quickly for our customers, especially when data is refreshed at higher frequencies (e.g., every 15 minutes). The better option was to figure out how to perform upserts in Sheets ourselves.

To do so, we read the data from the sheet we’re about to write to into memory and store it in a large map keyed by primary key. We reconcile it with the data we’re about to write, then dump the contents of the map back to the sheet. To make the user experience smoother, we also sort the rows by timestamp before writing them back. This guarantees that we don’t accidentally shuffle rows with every transfer, which might leave users confused. (There’s a simplified sketch of this loop below.)

“Wait, you keep all the data in memory… so how do you avoid blowing up your pods?” Great question! Luckily, Google Sheets has pretty stringent cell/row limits. That lets us restrict the amount of data that can be written to these destinations (we throw a nice error if someone tries to sync too much), which in turn guarantees that we don’t OOM our poor pods.

Another interesting problem we had to solve was auth: how do we let users give us access to their sheets in a way that both feels intuitive and upholds strong security guarantees? The cleanest user experience seemed to be asking the spreadsheet owner to share access with a new user, much like they would with any real human user. To make this possible without creating a superuser that would have access to _all_ the sheets, we programmatically generate a different user for each of our customers: we call the GCP IAM API to create a new service account per customer, then auth into the sheet through that service account.

One last fun UX challenge was how to prevent users from editing the “golden” data we just sync’d. It might not be immediately clear to them that this data is meant as a source-of-truth record rather than a playground. To get around this, we create protected ranges and prevent them from editing the sheets we write to. Sheets even adds a little padlock icon to the relevant sheets, which helps convey the “don’t mess with this” message.
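To make that read-reconcile-write loop concrete, here’s a simplified sketch in Python using the google-api-python-client (the post doesn’t say what our backend runs, so treat the library choice and names like `pk_index` and `ts_index` as illustrative). It assumes the first row of the sheet is a header and that every row carries its primary key and an updated-at timestamp:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]

def upsert_rows(spreadsheet_id, sheet_name, header, incoming_rows,
                pk_index=0, ts_index=1, creds_path="sa.json"):
    """Read the sheet, merge incoming rows by primary key, sort, write back."""
    creds = service_account.Credentials.from_service_account_file(
        creds_path, scopes=SCOPES)
    svc = build("sheets", "v4", credentials=creds)
    values = svc.spreadsheets().values()

    # 1. Read the current contents of the sheet into memory.
    resp = values.get(spreadsheetId=spreadsheet_id, range=sheet_name).execute()
    existing = resp.get("values", [])[1:]  # skip the header row

    # 2. Index existing rows by primary key, then overlay incoming rows:
    #    an existing key is updated in place, a new key is inserted.
    by_pk = {row[pk_index]: row for row in existing}
    for row in incoming_rows:
        by_pk[row[pk_index]] = row

    # 3. Sort by timestamp (assumed sortable, e.g. ISO-8601 strings) so
    #    rows don't shuffle between transfers.
    merged = sorted(by_pk.values(), key=lambda r: r[ts_index])

    # 4. Dump the merged contents back to the sheet in one write.
    values.update(
        spreadsheetId=spreadsheet_id,
        range=sheet_name,
        valueInputOption="RAW",
        body={"values": [header] + merged},
    ).execute()
```

Because the merged map always contains at least the rows that were already in the sheet, a plain update is enough here; there’s no need to clear the sheet first.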
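The size guardrail is just arithmetic on top of that. Google Sheets caps a spreadsheet at on the order of 10 million cells, and we refuse a sync before starting the write if it would exceed our budget. The numbers below are illustrative, not our real limits:

```python
SHEETS_CELL_CAP = 10_000_000   # Google Sheets' per-spreadsheet cap (approximate)
OUR_SYNC_BUDGET = 2_000_000    # a stricter internal budget -- made-up number

def check_sync_size(num_rows, num_cols):
    """Raise before writing if the sync can't fit in the destination."""
    cells = num_rows * num_cols
    limit = min(SHEETS_CELL_CAP, OUR_SYNC_BUDGET)
    if cells > limit:
        raise ValueError(
            f"Sync of {cells:,} cells exceeds the Google Sheets limit of "
            f"{limit:,} cells; consider a warehouse or CSV destination instead."
        )
```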
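Here’s roughly what the per-customer user creation looks like against the IAM API, again sketched with the Python client; the project ID and naming scheme are made up. The email this returns is what the customer shares their spreadsheet with, exactly as they would with a human collaborator:

```python
import google.auth
from googleapiclient.discovery import build

def create_customer_service_account(project_id, customer_slug):
    """Create a dedicated service account for one customer via the IAM API."""
    creds, _ = google.auth.default()
    iam = build("iam", "v1", credentials=creds)

    account = iam.projects().serviceAccounts().create(
        name=f"projects/{project_id}",
        body={
            # accountId must be 6-30 lowercase letters, digits, or hyphens.
            "accountId": f"sheets-sync-{customer_slug}",
            "serviceAccount": {
                "displayName": f"Sheets sync for {customer_slug}",
            },
        },
    ).execute()

    # The customer shares their spreadsheet with this address.
    return account["email"]
```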
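And the padlock: a sketch of locking down a synced sheet with the batchUpdate API’s addProtectedRange request. Listing only our service account as an editor is what keeps everyone else read-only; `svc` is the same Sheets client as in the first sketch, and `sheet_id` is the numeric grid ID, not the sheet’s name:

```python
def protect_synced_sheet(svc, spreadsheet_id, sheet_id, sa_email):
    """Mark the synced sheet read-only for everyone but our service account."""
    svc.spreadsheets().batchUpdate(
        spreadsheetId=spreadsheet_id,
        body={
            "requests": [{
                "addProtectedRange": {
                    "protectedRange": {
                        # Omitting row/column bounds protects the whole sheet.
                        "range": {"sheetId": sheet_id},
                        "description": "Synced by Prequel -- do not edit",
                        # warningOnly=False makes this a hard lock, not a nag.
                        "warningOnly": False,
                        "editors": {"users": [sa_email]},
                    }
                }
            }]
        },
    ).execute()
```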
If you want to take it for a spin, you can sign up on our site or reach us at hello (at) prequel.co. Happy to answer any other questions about the design!