Updates for postgREST 11 #417
Conversation
This doesn't cover the DX shortcoming in the general case - we'd still have issues with nested filters. I'd prefer to properly solve this in v3, where we can rework the filtering approach to be more of a fluent/builder pattern, e.g. Prisma.
I'm not sure about supporting any/all for … Looks useful for …
Yeah agree, so not needed there.
Agree with
🎉 This PR is included in version 1.6.0 🎉

The release is available on:

Your semantic-release bot 📦🚀
@soedirgo One advantage to using … With that, maybe we could reconsider adding a …
@soedirgo Could you put an example of why it would fail with nested filters? It should work as .eq('nested.id', 1, {negate: true}), right?
We'll need some form of escaping for both
Sorry, I meant filters like …
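The negate idea being discussed could be sketched roughly as follows. This is a hypothetical illustration, not part of the released postgrest-js API; the helper name is made up, and it assumes the option maps onto PostgREST's `not.` operator prefix:

```typescript
// Hypothetical sketch: a `negate` option could prefix the PostgREST
// operator with `not.`, producing e.g. nested.id=not.eq.1 for filters
// on embedded resources.
function eqParam(
  column: string,
  value: unknown,
  opts?: { negate?: boolean }
): string {
  const op = opts?.negate ? 'not.eq' : 'eq'
  return `${column}=${op}.${String(value)}`
}

console.log(eqParam('nested.id', 1, { negate: true })) // nested.id=not.eq.1
```

As the thread notes, a real implementation would also need escaping for the value and column parts.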
values: Row | Row[],
{
  count,
  defaultToNull = true,
@steve-chavez @soedirgo
Would this be a breaking change? The behavior prior to this was defaultToNull = false, where missing fields get the default value specified in the table definition, right?
I don't think so.
From the PostgREST docs:

"Any missing columns in the payload will be inserted as null values. To use the DEFAULT column value instead, use the Prefer: missing=default header."

And the header is only set if defaultToNull is false here. So I think that's correct.
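The mechanics being described can be sketched minimally, assuming the option does nothing except toggle that one header (the function name is hypothetical, not the actual postgrest-js source):

```typescript
// Hypothetical helper: defaultToNull only controls whether the
// PostgREST 11 `Prefer: missing=default` header value is sent.
function missingPrefer(defaultToNull: boolean): string | null {
  // defaultToNull = true matches PostgREST's default behavior (missing
  // columns become NULL), so no extra Prefer value is needed.
  return defaultToNull ? null : 'missing=default'
}
```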
@dshukertjr As Vinzent said, there's no breaking change.
defaultToNull was always true before. I would have liked to make it the default behavior on PostgREST, but it would have caused a breaking change (plus right now it also has a bit of a perf loss).
Ah, I see that this is only applicable when inserting multiple rows in bulk, huh? I was thinking it applies when inserting a single row as well. Maybe we could add a note about that in the comments.
Actually it also applies when inserting a single row. But this only takes effect when specifying &columns=..., both for single & bulk inserts.
Oh I think you're right, columns is only set when doing a bulk insert 😬 I'll add a comment about this and make the behavior consistent in v3. Thanks for the catch!
So there is currently a difference in inserting a single row as an object and a single row in an array with length 1?
It wasn't intentional, but yes
I just tried that, and I think they are equal. I ran the following code in the test db of postgrest-js, and other than the actual value of username the results are equal:

let res1 = await postgrest.from('users').insert({ username: 'bot1' }).select()
let res2 = await postgrest
  .from('users')
  .insert([{ username: 'bot2' }])
  .select()
console.log(res1)
console.log(res2)

Notably, status = ONLINE in both results, which is the default value of that column. So in both cases the column default is used, not null.
I ran further tests and understand it now. The missing fields are only mapped to null if a field is missing in one row but present in another, because then that column is listed in &columns=. To use the default value in that case as well, you have to pass defaultToNull = false. For missing fields in a bulk insert with only one row, all fields are listed in &columns=, and therefore the missing fields are mapped to the default.
So there is a difference between fields missing in all rows and fields missing in only a proper subset.
I just want to really understand this to properly document and implement it in postgrest-dart.
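That distinction can be illustrated with a small sketch, assuming (as an illustration, not the actual postgrest-js implementation) that the client builds ?columns= from the union of keys across all rows:

```typescript
// Hypothetical illustration: ?columns= is derived from the union of keys
// across all rows of a bulk insert. A field missing from only some rows
// still appears in columns=, so PostgREST fills it with NULL (or with the
// column DEFAULT when Prefer: missing=default is sent) for those rows.
// A field missing from every row never appears in columns= and always
// gets the column DEFAULT.
function columnsParam(rows: Record<string, unknown>[]): string {
  const cols = new Set<string>()
  for (const row of rows) {
    for (const key of Object.keys(row)) cols.add(key)
  }
  return Array.from(cols).join(',')
}

console.log(columnsParam([{ id: 1, status: 'OFFLINE' }, { id: 2 }])) // id,status
console.log(columnsParam([{ id: 1 }])) // id
```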
@Vinzent03 I think that's correct - so there isn't a difference between a single-row insert and a bulk insert with 1 row after all. Sorry for the back and forth.
undefinedToDefault option for the insert() and upsert() methods, using ?columns and a Prefer header (PostgREST/postgrest#2672, Prefer: missing=default - PostgREST/postgrest#2723).

modifier: 'any' | 'all' option to the eq, like, ilike, gt, gte, lt, lte filters, e.g. supabase.from('countries').select().eq('id', [1,2,3,4,5,6,7,8], {modifier: 'any'}) for or operations. It makes a filter accept an array.

This also makes me think about not, which doesn't have good DX. Maybe we can add another modifier to all operators like .eq('id', 1, {negate: true})
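The modifier option could serialize along these lines. The exact URL shape is an assumption based on PostgREST 11's any/all operator modifiers, and the helper name is hypothetical:

```typescript
// Hypothetical sketch: serializing a filter that accepts an array and an
// any/all modifier into PostgREST 11's operator-modifier query syntax,
// e.g. id=eq(any).{1,2,3} - the row matches if the filter holds against
// ANY (or ALL) of the listed values.
function serializeFilter(
  column: string,
  operator: string,
  values: unknown[],
  modifier: 'any' | 'all'
): string {
  return `${column}=${operator}(${modifier}).{${values.map(String).join(',')}}`
}

console.log(serializeFilter('id', 'eq', [1, 2, 3], 'any')) // id=eq(any).{1,2,3}
```

A real implementation would also need to escape values containing commas or braces.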