Conversation
|
Awesome! I'll take a look soon |
|
Looks pretty good to me. I think the only tricky part is when you have a nested object passed to a call that should not be camel-cased; in my case, the user_id field in user-defined event fields. We could generate some functions specific to the generated types, but I'm still not sure it's worth it |
|
Hmm, probably not in that case. But should it be possible to pass object fields that are not defined in the schema through the API in the first place? I assumed the schema would prevent this, i.e. such a use case could be covered with: |
|
At the moment it allows arbitrary objects:

```json
{
  "name": "fields",
  "description": "the log fields.",
  "type": "object"
},
```

Which I'm using for user-defined event fields. I didn't think about that before, but to convert between the cases we'd pretty much have to generate the conversion functions from the types, I guess. Feels a bit ugly though |
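To illustrate why arbitrary objects are the tricky part: a blanket deep snake-to-camel conversion has no way to know which keys are schema fields and which are user data, so it would rename the user-defined keys too. A minimal sketch (the function name and payload are hypothetical, not from the generator):

```js
// Hypothetical sketch: naive recursive snake_case -> camelCase conversion.
function camelCaseDeep(value) {
  if (Array.isArray(value)) {
    return value.map(camelCaseDeep)
  }
  if (value !== null && typeof value === 'object') {
    const out = {}
    for (const key of Object.keys(value)) {
      // rewrite snake_case keys to camelCase
      const camel = key.replace(/_([a-z])/g, (_, c) => c.toUpperCase())
      out[camel] = camelCaseDeep(value[key])
    }
    return out
  }
  return value
}

// The user-defined "fields" object gets mangled along with everything else:
camelCaseDeep({ event: 'login', fields: { user_id: 42 } })
// -> { event: 'login', fields: { userId: 42 } } — user_id was renamed, which we don't want
```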
|
I agree about the conversion functions, but I'd argue the schema shouldn't allow arbitrary fields, because the whole benefit of having a schema is that you know the fields and types. The downside, as you mentioned, is that the decoder/encoder would need knowledge of the schema, which means more code, but I think we should consider it anyway.
I'm willing to have a hack at it for TS/JS, maybe there's an elegant way to do it. |
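For comparison, declaring the fields explicitly instead of an opaque object might look something like this. This is only a sketch: I'm assuming the schema format supports nested field definitions, and the `fields` key/shape here is made up, not taken from the actual schema:

```json
{
  "name": "fields",
  "description": "the log fields.",
  "type": "object",
  "fields": [
    { "name": "user_id", "type": "int" },
    { "name": "message", "type": "string" }
  ]
},
```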
|
Snake-case doesn't bother me here, but if we're solving for timestamps anyway we might as well. I still kind of like plain objects over using Maps personally, more from a UX perspective; from a technical perspective I definitely agree. I suppose the simplest would just be some simple encode/decode functions per type. Could be fancier, but basically:

```js
function encodeGetAlertsInput(o) {
  const v = {}
  if (Object.hasOwnProperty.call(o, 'projectId')) {
    v.project_id = o.projectId
  }
  return v
}

function decodeGetAlertsOutput(o) {
  const v = {}
  // ... map o.alerts with decodeAlert blah blah
  return v
}

function decodeAlert(o) {
  const v = {}
  if (Object.hasOwnProperty.call(o, 'created_at')) {
    v.createdAt = o.created_at
  }
  if (Object.hasOwnProperty.call(o, 'updated_at')) {
    v.updatedAt = o.updated_at
  }
  // ... other fields
  return v
}
```

And sneak the timestamp -> Date conversion in there as well. Any thoughts on a cleaner solution? Since we're generating, we might as well make it as performant as possible, I guess. |
|
I have no strong preference with regard to objects vs maps, but I agree objects feel more natural. We can keep objects for now and maybe later add a generator option to emit maps if anyone ever needs it. I guess the only alternative to per-type encode/decode functions is to paste the schema into the generated file and have a single decoder and encoder that traverse it. It's not clear to me which approach is better; I'd lean toward whichever is easier to implement, and to me the "schema in JS" approach seems easier, as I can imagine the Go codegen for the other approach getting nasty. |
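The "schema in JS" alternative might look roughly like this. Purely a hypothetical sketch: the embedded schema shape, type names, and field list are made up for illustration, not the generator's actual output:

```js
// Hypothetical: a single generic decoder driven by an embedded schema object.
const schema = {
  Alert: {
    created_at: { name: 'createdAt', type: 'timestamp' },
    message: { name: 'message', type: 'string' },
  },
}

function decode(typeName, o) {
  const fields = schema[typeName]
  const v = {}
  for (const key of Object.keys(fields)) {
    if (!Object.hasOwnProperty.call(o, key)) continue
    const f = fields[key]
    // rename to the camelCase name; convert timestamps to Date
    v[f.name] = f.type === 'timestamp' ? new Date(o[key]) : o[key]
  }
  return v
}

decode('Alert', { created_at: '2020-01-01T00:00:00Z', message: 'hi' })
// -> { createdAt: Date, message: 'hi' }
```

The trade-off the message above describes: this keeps the generated Go code simple (it just serializes the schema), at the cost of shipping the whole schema to every client.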
|
The generated approach is what I did with validation in the Go server portion, and it works pretty nicely: https://github.com/apex/rpc/blob/master/generators/gotypes/gotypes.go#L155-L168. The schemas can get pretty huge, so I don't think embedding them would be ideal, especially for the browser JS client. I guess the JS client would be another reason to consider just leaving the fields as-is (the timestamp stuff still applies though), but the extra bulk isn't really worth it there just for camel-case IMO. |
|
We should use camel-case in interfaces generated from the schema as that's the ts/js convention. Implemented a decoder and encoder to convert fields in API objects to/from snake-case/camel-case.
Resolves #40