Description
Currently, the type provider explicitly asserts column existence, type, and order within the returned row by comparing the design-time data columns to the runtime version obtained from NpgsqlDataReader.
However, with multiple environments, e.g. production/feature branch/staging, the column order in a PostgreSQL db can differ.
When generating a schema dump from production, pg_dump for some reason does not preserve the column ordering in the output schema creation script (the columns in the CREATE TABLE statement are not in the same order as in the db that was the target of the dump).
The issue shows up only with "SELECT * FROM ..." statements, where the order of columns cannot be guaranteed. Using such statements is bad practice, but in practice some projects do rely on such queries.
I've checked the provider logic, and at first glance neither the DataTable mapping nor the single-row mapping depends on a particular order of columns returned by the NpgsqlDataReader.
So I've relaxed the check inside VerifyOutputColumns to verify only the number of columns and the presence and type of the columns expected from design time, ignoring their order.
After swapping in the modified type provider dll it seems to work just fine; however, there may be hidden problems I'm not aware of.
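For illustration, here is a minimal sketch of what such an order-insensitive check could look like. The ExpectedColumn descriptor and the function below are hypothetical and are not the provider's actual VerifyOutputColumns implementation:

```fsharp
open System
open Npgsql

// Hypothetical descriptor for a design-time column: name plus expected CLR type.
type ExpectedColumn = { Name: string; ClrType: Type }

// Order-insensitive verification: the column count must match, and every
// design-time column must be present with the same name and type, but its
// ordinal position in the reader no longer matters.
let verifyOutputColumns (expected: ExpectedColumn list) (reader: NpgsqlDataReader) =
    if reader.FieldCount <> expected.Length then
        failwithf "Expected %d columns but got %d." expected.Length reader.FieldCount
    let actualByName =
        [ for i in 0 .. reader.FieldCount - 1 -> reader.GetName i, reader.GetFieldType i ]
        |> dict
    for col in expected do
        match actualByName.TryGetValue col.Name with
        | true, t when t = col.ClrType -> ()
        | true, t -> failwithf "Column '%s' has type %O, expected %O." col.Name t col.ClrType
        | false, _ -> failwithf "Column '%s' is missing from the result set." col.Name
```

A missing column, an extra column, or a type mismatch still fails fast; only the positional comparison is dropped.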
How to reproduce
- Given a PostgreSQL database with table A that has columns A1, A2
- Dump the schema and modify the CREATE TABLE script to create the columns in order A2, A1
- At design time, use your local copy of the db, restored from the modified dump, and create a simple "SELECT * FROM A" db command
- At runtime, target the original db (see the sketch below for a quick way to observe the differing column order)
Expected:
The query executes successfully.
Actual:
The query fails to execute due to the mismatched column order.
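For a quick way to see the differing column order outside the type provider, a small script along these lines can be used; the connection strings are placeholders for the local copy and the original database:

```fsharp
open Npgsql

// Prints the column names in the order the server returns them for SELECT *.
let printColumnOrder (connectionString: string) =
    use conn = new NpgsqlConnection(connectionString)
    conn.Open()
    use cmd = new NpgsqlCommand("SELECT * FROM A", conn)
    use reader = cmd.ExecuteReader()
    [ for i in 0 .. reader.FieldCount - 1 -> reader.GetName i ]
    |> String.concat ", "
    |> printfn "%s"

// Design-time db restored from the modified dump: expected to print "A2, A1".
printColumnOrder "Host=localhost;Database=local_copy;Username=postgres"
// Original runtime db: expected to print "A1, A2".
printColumnOrder "Host=localhost;Database=original;Username=postgres"
```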