Migrating from PlanetScale to Neon.tech
Following PlanetScale's recent announcement about discontinuing their free plan, like many others, I found myself reevaluating my options. As a developer who relied heavily on PlanetScale for my hobby projects, this news initially left me disheartened. However, after exploring alternative platforms, I stumbled upon Neon, which offers a nice free tier for their Postgres hosting. In this blog post, I'll walk you through my personal experience of migrating from PlanetScale to Neon.tech.
I picked Neon.tech because they have a free tier, and their "Launch" plan currently starts at $19, which is a bit more reasonable for hobby projects that need to scale up. Another option is CockroachDB.
Connection strings
Get your PlanetScale connection string either from your app's .env or by creating a new password in the PlanetScale dashboard.
PlanetScale
mysql://<USER>:<PASSWORD>@aws.connect.psdb.cloud/mydb?sslaccept=strict
Neon
If you don't have an account yet, sign up on neon.tech and create your database. If possible, use the same database name that you used on PlanetScale.
postgresql://<USER>:<PASSWORD>@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require
Install pgloader
GitHub - dimitri/pgloader: Migrate to PostgreSQL in a single command!
Both installing pgloader via brew on my M1 Mac and via apt install on Ubuntu 22.04.4 LTS did not work for me, so I had to build pgloader from source.
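If you run into the same problem, the build itself is quick. A sketch of the steps on Ubuntu, following pgloader's README (the dependency list may vary by release):

# build dependencies from pgloader's README; may vary by release
sudo apt-get install sbcl unzip libsqlite3-dev make curl gawk freetds-dev libzip-dev
git clone https://github.com/dimitri/pgloader.git
cd pgloader
make pgloader
# the binary ends up in build/bin
./build/bin/pgloader --version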
Create a pgloader config file
Prepare Neon connection string for pgloader
For pgloader, we have to use a slightly changed connection string for our target database on Neon.
Instead of user:password@... we should use user:endpoint=<ENDPOINT>;password@...
postgresql://<USER>:endpoint=ep-super-paper-12345;<PASSWORD>@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require
Prepare PlanetScale connection string for pgloader
pgloader also expects different query params in the connection string. Change ?sslaccept=strict to ?sslmode=require in your PlanetScale connection string.
mysql://<USER>:<PASSWORD>@aws.connect.psdb.cloud/mydb?sslmode=require
I have two branches on PlanetScale, main and dev. For my main branch I created a file config-main.load with the following content.
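The file boils down to a single LOAD DATABASE command. A sketch using the example connection strings from above (yours will differ):

LOAD DATABASE
    -- source: the PlanetScale main branch
    FROM mysql://<USER>:<PASSWORD>@aws.connect.psdb.cloud/mydb?sslmode=require
    -- target: the Neon database, with the endpoint workaround in the user part
    INTO postgresql://<USER>:endpoint=ep-super-paper-12345;<PASSWORD>@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require
WITH quote identifiers, include drop, create tables, create indexes;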
Make sure your file ends with a semicolon (;).
Repeat this for your other branches. For my dev branch I created a file config-dev.load with the connection strings for my dev branch.
Note
with quote identifiers, include drop, create tables, create indexes makes our migration a lot easier. quote identifiers preserves the case of the original column and table names. One of my tables, for example, is named PasswordResetToken in Prisma. It is also named like that on PlanetScale, and its columns are named in camelCase, like userId. The other flags are optional.
Not using with quote identifiers would change all table and column names to lowercase, which would require changes in our Prisma schema.
Migrate the data
Now we can migrate the data from PlanetScale to Neon by running pgloader with our config file.
Watch out! This drops all existing data in your target tables. So make sure your target is correct and back up any existing data on your neon db.
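With that warning in mind, the invocation itself is just pgloader plus the config file:

pgloader config-main.load
# and again for every other branch
pgloader config-dev.load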
Check data after migration
Go to the Neon Console, select the branch we migrated to, and look through the data.
Update schema.prisma
Thanks to our with quote identifiers migration, we don't have to set any @@map mappings for the table names. The only thing we have to change is the provider.
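Concretely, that means switching the provider in the datasource block from mysql to postgresql. A sketch, assuming the connection string lives in a DATABASE_URL env var:

datasource db {
  provider = "postgresql" // was "mysql" on PlanetScale
  url      = env("DATABASE_URL")
}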
Update the connection string in your app
Update your .env.local or .env with the new connection string from Neon.
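Assuming the variable is named DATABASE_URL, the entry is just the plain Neon connection string; the endpoint= workaround was only needed for pgloader:

# .env
DATABASE_URL="postgresql://<USER>:<PASSWORD>@ep-super-paper-12345.eu-central-1.aws.neon.tech/mydb?sslmode=require"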
Don't forget to update your connection string in your CI pipelines (GitHub secrets or similar) too, once you've migrated the data.
Baseline your schema
PlanetScale uses some custom handling for the Prisma migrations and I didn't find a way to export those. For Neon.tech, Prisma will create a _prisma_migrations table, where all migrations and their hashes are stored, so Prisma knows which migrations have been run.
If you don't supply explicit types in your Prisma schema, Prisma will infer the types from the database. So we have to baseline our schema to make sure Prisma uses the correct types.
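The command behind this is Prisma's introspection, which pulls the live database structure into your schema:

pnpm prisma db pull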
This will update your schema.prisma with the "real" types currently used in your database. For example, an id column was varchar(191) on PlanetScale, as this is the default db type for MySQL when using Prisma's String type. The default for Postgres would be text.
We can now remove the custom mappings and types from our schema step by step, and run migrations to make sure everything still works, without losing any data.
Create a baseline migration folder. This will hold the create table statements for all our tables.
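Prisma's baselining docs use 0_init as the folder name, which is what I'll assume here:

mkdir -p prisma/migrations/0_init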
Then generate an SQL script from our pulled schema.prisma.
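prisma migrate diff can generate that script by diffing an empty database against the schema:

pnpm prisma migrate diff \
  --from-empty \
  --to-schema-datamodel prisma/schema.prisma \
  --script > prisma/migrations/0_init/migration.sql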
For whatever reason, my migration.sql file had two lines at the top that weren't valid SQL. They would have broken our migration, so I removed them.
That's the state our db is already in, so we don't have to run this migration. But we have to tell Prisma that our db is already in this state.
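That is what prisma migrate resolve is for; it records the migration in _prisma_migrations without running it:

pnpm prisma migrate resolve --applied 0_init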
Removing all the custom map: ... and @db... annotations from our schema and running pnpm prisma migrate dev resulted in Prisma wanting to drop all tables, which would mean losing all data. Instead, I split the changes into smaller migrations.
First I removed all @db.VarChar(191) and then created a new migration with this small change.
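Each of these steps is an ordinary dev migration; the migration name below is just an example:

pnpm prisma migrate dev --name remove-varchar-191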
Then I removed @db.Timestamptz(6) and created another migration.
I repeated this for the custom map annotations until I ended up with the schema.prisma I had before moving the data from PlanetScale to Neon. This should leave you with a database that has the recommended types, without any lost data.
Since this is the real "baseline" of our schema, I opted to delete all migrations again, and create another baseline from the state of our migrated database.
I deleted all rows from my db's _prisma_migrations table.
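If you'd rather do this in SQL than through the Neon console, it's a one-liner:

DELETE FROM _prisma_migrations;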
Then I removed the migrations folder locally.
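Assuming the default Prisma project layout:

rm -rf prisma/migrations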
And created a new baseline migration, just like we did before: the same mkdir, migrate diff, and migrate resolve steps.
It's a bit hacky and you might be better off just creating a new db and migrating your data there, but this worked for me.