It appears the input is being stringified again. Am I missing something here?
Here's an example of using the json data type and doing a round-trip query from node -> postgres -> node, with the json type preserved:
```js
const pg = require('pg')

// pg.connect() is the pre-pg@7 pooled-connect API used in this thread.
pg.connect((error, client, done) => {
  if (error) throw error
  client.query('select $1::json as arr', [JSON.stringify([{ foo: 'bar' }])], (error, result) => {
    done() // release the client back to the pool
    if (error) throw error
    console.log(result.rows[0].arr) // => [{ foo: 'bar' }]
    process.exit(0)
  })
})
```
@jrf0110 thanks for this. I've confirmed this does work on insert as well. It appears something is double-stringifying my input.
Appreciate the help, fellas!
If you need to pass in an array of objects as a single JSON value, then do what @jrf0110 has done and call JSON.stringify on the parameter yourself before adding it to your array of parameters. node-postgres leaves any parameter that is already a string untouched, passing it directly onto the wire to the backend. You may need an explicit cast (e.g. `::json`) in your query.
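For the insert case confirmed above, here is a minimal sketch (the `docs` table and `payload` column are assumptions for illustration, not from this thread):

```js
// Hypothetical table for illustration: CREATE TABLE docs (payload json);
const { Pool } = require('pg')
const pool = new Pool() // connection settings come from PG* env vars

async function insertJson() {
  const value = [{ foo: 'bar' }, { foo: 'baz' }]
  // Stringify the array yourself; a raw JS array would be serialized as a
  // Postgres array literal rather than as json.
  await pool.query('insert into docs (payload) values ($1::json)', [
    JSON.stringify(value),
  ])
  const { rows } = await pool.query('select payload from docs')
  console.log(rows[0].payload) // => [ { foo: 'bar' }, { foo: 'baz' } ]
  await pool.end()
}

insertJson().catch(console.error)
```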
… conversions between object and text, binary and hexBinary
Yadamu: Enable TABLES parameter to be used to limit operations to a specific subset of the tables in the schema
Yadamu: Added support for TABLES, WAREHOUSE and ACCOUNT command line parameters
Yadamu: Refactor DEFAULT handling and PARAMETERS as GETTERS
DBReader: pass cause to forcedEnd()
DBWriter: Use await when calling dbi.setMetadata()
YadamuLibrary: Add Boolean Conversion utilities
YadamuLogger: Disable fileWriter column count check
YadamuRejectManager: Disable fileWriter column count check
YadamuDBI: Standardized naming conventions for the SQL statements used by each driver (see the sketch after this list):
SQL_CONFIGURE_CONNECTION
SQL_SYSTEM_INFORMATION_SCHEMA
SQL_GET_DLL_STATEMENTS
SQL_SCHEMA_INFORMATION
SQL_BEGIN_TRANSACTION
SQL_COMMIT_TRANSACTION
SQL_ROLLBACK_TRANSACTION
SQL_GET_DDL_STATEMENTS
SQL_CREATE_SAVE_POINT
SQL_RESTORE_SAVE_POINT
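A minimal sketch of what this constants-plus-getters convention might look like (illustrative names only, not the actual YadamuDBI source):

```js
// Illustrative only: SQL text lives in named constants and is exposed
// through getters, so a driver subclass can override a single statement
// without touching the surrounding logic.
const SQL_BEGIN_TRANSACTION    = 'begin transaction'
const SQL_COMMIT_TRANSACTION   = 'commit transaction'
const SQL_ROLLBACK_TRANSACTION = 'rollback transaction'

class ExampleDBI {
  get SQL_BEGIN_TRANSACTION()    { return SQL_BEGIN_TRANSACTION }
  get SQL_COMMIT_TRANSACTION()   { return SQL_COMMIT_TRANSACTION }
  get SQL_ROLLBACK_TRANSACTION() { return SQL_ROLLBACK_TRANSACTION }

  async beginTransaction(connection) {
    // Assumed driver connection object with a query() method.
    await connection.query(this.SQL_BEGIN_TRANSACTION)
  }
}
```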
YadamuDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
YadamuDBI: Add incoming Spatial format information to table metadata in getTableInfo()
YadamuDBI: All drivers Set transaction state before performing commit and rollback operations
YadamuDBI: Remove forceEndOnInputStreamError()
YadamuDBI: Refactor decomposeDateType => YadamuLibrary
YadamuDBI: Refactor decomposeDateTypes => YadamuLibrary
YadamuDBI: Add support for table name filtering via TABLES parameter
YadamuParser: remove objectMode argument from constructor and all descendant classes
YadamuParser: Use Object.values() to Pivot from Object to Array
YadamuWriter: Pass cause from forcedEnd() to FlushCache() to rollbackTransaction()
YadamuWriter: Disable column count check once skipTable is true
YadamuWriter: FlushCache() Only commit or rollback if there is an active transaction
YadamuWriter: FlushCache() Skip writing pending rows if skipTable is true
YadamuQA: Refactor DEFAULT handling and PARAMETERS as GETTERS
YadamuQA: Standardize test names across export, import, fileRoundtrip, dbRoundtrip and lostConnection configurations
YadamuQA: Abort test operation when step fails.
YadamuQA: Enable integration of LoaderDBI by using dynamic driver loading to load FileDBI
YadamuQA: Fixed Accumulators
YadamuQA: Added Unload Testing Framework to Export
ExampleDBI: Add ExampleConstants class
ExampleDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
ExampleParser: Use Object.values() to Pivot from Object to Array
ExampleParser: Use transformation array
ExampleParser: Do not run transformations unless one or more transformations are defined
FileDBI: Add DataType mapping to tableInfo
FileDBI: Add SpatialFormat to tableInfo
FileDBI: Wrap calls to fs.createReadStream() in a promise
FileDBI: Remove source information from Metadata before writing to file
FileWriter: Use transformation array for Buffer and Date conversions
FileWriter: Do not run transformations unless one or more transformations are defined
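Several parser/writer entries here share the same "transformation array" idea. A hedged sketch of the pattern, with illustrative names (not the Yadamu implementation):

```js
// Illustrative sketch of the "transformation array" pattern: one transform
// function (or null) per column; the row loop is skipped entirely when no
// transformations are defined.
function buildTransformations(dataTypes) {
  return dataTypes.map((dataType) => {
    switch (dataType) {
      case 'binary':
        return (v) => Buffer.from(v, 'hex') // hex string -> Buffer
      case 'date':
        return (v) => new Date(v).toISOString() // Date/text -> ISO string
      default:
        return null // no conversion needed for this column
    }
  })
}

function applyTransformations(transformations, row) {
  transformations.forEach((transform, idx) => {
    if (transform !== null && row[idx] !== null) {
      row[idx] = transform(row[idx])
    }
  })
  return row
}

// "Do not run transformations unless one or more transformations are defined":
const transformations = buildTransformations(['varchar', 'date'])
const rows = [['a', '2021-01-02']] // drivers returning rows as objects can
                                   // pivot with Object.values(row) first
const out = transformations.some((t) => t !== null)
  ? rows.map((row) => applyTransformations(transformations, row))
  : rows
console.log(out) // => [ [ 'a', '2021-01-02T00:00:00.000Z' ] ]
```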
MySQLDBI: Add MySQLConstants class
MySQLDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MySQLDBI: Remove JSON_ARRAY operator from SELECT statements
MySQLDBI: Return binary data types as Buffer, not HexBinary String
MySQLDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
MySQLDBI: Map SET columns to JSON when generating DML statements
MySQLDBI: Rename column "OWNER" to "TABLE_SCHEMA" in "schemaInformation" query
MySQLDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
MySQLDBI: Map tinyint(1) to boolean;
MySQLDBI: Map Postgres "jsonb" data type to JSON.
MySQLDBI: Add Snowflake data type mappings
MySQLParser: Use Object.values() to Pivot from Object to Array
MySQLParser: Use transformation array for JSON & SET conversions
MySQLParser: Return JSON as object
MySQLParser: Return SET column as JSON array
MySQLParser: Do not run transformations unless one or more transformations are defined
MySQLParser: SET columns are automatically converted to JSON by the driver, no transformation required
MySQLWriter: Use YadamuSpatialLibrary to recode entire batch to WKT on WKB insert error
MySQLWriter: Remove direct use of WKX package
MySQLQA: Cast SET columns in the source table to JSON when comparing results
MariaDBI: Add MariadbConstants class
MariaDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MariaDBI: Remove JSON_ARRAY operator from SELECT statements
MariaDBI: Return binary data types as Buffer, not HexBinary String
MariaDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
MariaDBI: Map SET columns to JSON when generating DML statements
MariaDBI: Rename column "OWNER" to "TABLE_SCHEMA" in schemaInformationQuery
MariaDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
MariaDBI: Use rowsAsArray option at connection time
MariaDBI: Join with information_schema.check_constraints to identify JSON columns
MariaDBI: Map tinyint(1) to boolean;
MariaDBI: Map Postgres "jsonb" data type to JSON.
MariaDBI: Fetch float and double as string
MariaParser: Use transformation array for JSON
MariaParser: Return JSON as object
MariaQA: Cast SET columns in the source table to Pseudo JSON when comparing results
MsSQLDBI: Add MsSQLConstants class
MsSQLDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MsSQLDBI: Map MySQL SET columns to JSON when generating DML statements
MsSQLDBI: Fixed parameter names used in reportTransactionState()
MsSQLDBI: Fix mapping of Oracle data type "MDSYS.SDO_GEOMETRY"
MsSQLDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
MsSQLDBI: Rename column "OWNER" to "TABLE_SCHEMA" in schemaInformationQuery
MsSQLDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
MsSQLDBI: Wrap calls to fs.createReadStream() in a promise
MsSQLDBI: Map bit to Boolean. Write Boolean columns as true/false
MsSQLDBI: Map Postgres "jsonb" data type to JSON.
MsSQLDBI: Add Snowflake data type mappings
MsSQLDBI: Restrict STIsValid test to Geography columns
MsSQLDBI: Use YadamuLibrary for Boolean Conversions
MsSQLParser: Use Object.values() to Pivot from Object to Array
MsSQLParser: Return binary data types as Buffer, not HexBinary String
MsSQLWriter: Convert GeoJSON to WKT before writing batch
OracleDBI: Add OracleConstants class
OracleDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
OracleDBI: Map MySQL SET columns to JSON when generating DML statements
OracleDBI: Standardize LOB conversion functions using sourceToTarget naming convention
OracleDBI: Remove all HexBinary LOB conversion functions
OracleDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
OracleDBI: Convert all LOB copy operations to Stream.pipeline() (see the sketch after the Oracle entries)
OracleDBI: Rename column "OWNER" to "TABLE_SCHEMA" in schemaInformationQuery
OracleDBI: Standardize Key Names in "schemaInformation", "metadata", and "tableInfo" objects
OracleDBI: Wrap calls to fs.createReadStream() in a promise
OracleDBI: Export BFILE in JSON Format / Import BFILE from JSON Format
OracleDBI: Add Snowflake data type mappings
OracleDBI: Map raw(1) to Boolean. Write Boolean Values as 0x00 and 0x01
OracleDBI: Map Postgres "jsonb" data type to JSON.
OracleDBI: Use YadamuLibrary for Boolean Conversions
OracleDBI: Remove unused parameter TABLE_NAME from YADAMU_EXPORT procedure
OracleDBI: Switch Default JSON storage to BLOB in Oracle12c
OracleDBI: Return JSON as CLOB or BLOB. Use JSON_SERIALIZE in 20c with the native JSON data type.
OracleParser: Convert all LOB copy operations to Stream.pipeline()
OracleParser: Return binary data types as Buffer, not HexBinary String
OracleParser: Convert JSON stored as BLOB to text.
OracleParser: Use transformation array for CLOB, BLOB and JSON conversions
OracleParser: Do not run transformations unless one or more transformations are defined
OracleWriter: Use await when serializing LOB columns before logging errors
OracleWriter: Put rows containing LOBs back into column order before logging errors
OracleWriter: Remove HexBinary conversions
OracleWriter: Use await when calling HandleBatchError
OracleError: Implement "SpatialError" method
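The Stream.pipeline() entries flagged above (OracleDBI and OracleParser) refer to Node's stream.pipeline; a minimal sketch of a LOB-to-file copy (the helper name is hypothetical):

```js
// Hypothetical helper illustrating a LOB copy via stream.pipeline, which
// propagates errors and cleans up both streams, unlike bare .pipe() chains.
const { pipeline } = require('stream')
const { promisify } = require('util')
const fs = require('fs')

const pipelineAsync = promisify(pipeline)

// `lob` is assumed to be a readable stream (node-oracledb Lob objects are).
async function copyLobToFile(lob, path) {
  await pipelineAsync(lob, fs.createWriteStream(path))
}
```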
PostgresDBI: Add PostgreConstants class
PostgresDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
PostgresDBI: Return binary data types as Buffer, not HexBinary String
PostgresDBI: Remove all encoding/decoding of binary data as HEX from generated SQL
PostgresDBI: Map MySQL Set data type to JSON
PostgresDBI: Serialize JSON/JSONB Columns to avoid brianc/node-postgres#442
PostgresDBI: Set rowMode to array on query execution (see the sketch below)
PostgresDBI: Wrap calls to fs.createReadStream() in a promise
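rowMode is a documented node-postgres query option; a minimal sketch of what setting rowMode to array on query execution implies:

```js
// With rowMode: 'array', node-postgres returns each row as a positional
// array instead of a keyed object, which suits writers that address
// columns by index.
const { Pool } = require('pg')
const pool = new Pool()

async function demo() {
  const result = await pool.query({
    text: 'select 1 as a, 2 as b',
    rowMode: 'array',
  })
  console.log(result.rows[0]) // => [ 1, 2 ] rather than { a: 1, b: 2 }
  await pool.end()
}

demo().catch(console.error)
```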
SnowflakeDBI: Add SnowflakeConstants class
SnowflakeDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
SnowflakeDBI: Defined constants for all SQL statements used by the driver
SnowflakeDBI: Removed functions related to uploading and processing YADAMU files on the server
SnowflakeDBI: Optimize insert of variant types using FROM VALUES (),()
SnowflakeDBI: Fix PARSE_XML usage when copying from databases where XML datatype is "XML"
SnowflakeDBI: Added TIME_INPUT_FORMAT mask
SnowflakeDBI: Add support for specifying transient and DATA_RETENTION_TIME to create table statements
SnowflakeDBI: Added support for type discovery for columns of type 'USER_DEFINED_TYPE'
SnowflakeDBI: Refactor Spatial Conversion operations to new class YadamuSpatialLibrary
SnowflakeDBI: Added support for GEOMETRY data type
SnowflakeDBI: Switched Default Spatial Format to EWKB
SnowflakeDBI: Use Describe to get length of Binary columns
SnowflakeDBI: Add Duck Typing for Variant columns
SnowflakeParser: Use Object.values() to Pivot from Object to Array
SnowflakeWriter: Convert Buffers to HexBinary before inserting binary data (Snowflake-sdk does not handle Buffer)
SnowflakeWriter: Recode WKB and EWKB as WKT, as Snowflake rejects some valid WKB/EWKB values
SnowflakeQA: Add transient and DATA_RETENTION_TIME to Database and Schema creation
SnowflakeQA: Added YADAMU_TEST stored procedure for comparing Snowflake schemas
SnowflakeQA: Added implementation for function compareSchemas()
SnowflakeQA: Added implementation for function getRowCounts()
MongoDBI: Add MongoConstants class
MongoDBI: Refactor DEFAULT handling and PARAMETERS as GETTERS
MongoDBI: Add stack traces to MongoError
MongoError: Add stack trace information
MongoParser: Use Object.values() to Pivot from Object to Array
MongoParser: Use transformation array for data conversions
MongoParser: Do not run transformations unless one or more transformations are defined
MongoWriter: Use transformation array for Buffer and Date conversions
MongoWriter: Do not run transformations unless one or more transformations are defined
MongoWriter: Fixed objectId transformation
MongoParser: Decode Mongo binData BSON Objects
MongoQA: Report Collection Hash Values as Extra and Missing rows counts
LoaderDBI: Add Experimental version of parallel File load/unload option
Docker: Limit Container Memory to 16GB
Here's an example of using the json data type and doing a round-trip query from node -> postgres -> node, with the json type preserved:
https://github.com/brianc/node-postgres/blob/master/test/integration/client/json-type-parsing-tests.js
Hope that helps!
I found the file @brianc referred to, since the link is now broken: https://github.com/brianc/node-postgres/blob/9274f08fa2d8ae55a218255bf7880d26b6abc935/test/integration/client/json-type-parsing-tests.js
This is a **BREAKING** change that:
- adds tests against the [upstream `sharedb` DB test suite][1]
- adds a CI build for the tests against current Node.js and Postgres
versions
- breaks the API to conform to the upstream tests, including adding
metadata support
The breaks are:
- Dropping non-null constraints on `snapshots.doc_type` and
  `snapshots.data` (to allow `Doc`s to be deleted)
- Adding a new `snapshots.metadata` `json` column
- Respecting `options.metadata` and `fields.$submit`, which were
previously ignored on `getOps()`, and useless on `getSnapshot()`
(which didn't store metadata)
- `snapshot.m` is now `undefined` if not present, or `null` if
  unrequested (in line with the spec)
On top of this it also makes some bugfixes to conform to the spec:
- Ignore unique key validations when committing, since this may happen
during concurrent commits
- `JSON.stringify()` JSON fields, which [break][2] if passed a raw
  array (see the sketch after this list)
- Default `from = 0` if unset in `getOps()`
[1]: https://github.com/share/sharedb/blob/7abe65049add9b58e1df638aa34e7ca2c0a1fcfa/test/db.js#L25
[2]: brianc/node-postgres#442
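As referenced above, a sketch of the failure mode from brianc/node-postgres#442 and the stringify fix (using the `snapshots.metadata` column from the schema changes above):

```js
// Sketch of the raw-array failure the JSON.stringify() fix guards against.
const { Pool } = require('pg')
const pool = new Pool()

const meta = [{ op: 'insert' }] // a JSON value that happens to be an array

// Breaks: node-postgres serializes a raw JS array as a Postgres array
// literal ('{...}'), which is not valid input for a json column.
// pool.query('insert into snapshots (metadata) values ($1)', [meta])

// Works: stringify first so the backend receives json text.
pool
  .query('insert into snapshots (metadata) values ($1::json)', [
    JSON.stringify(meta),
  ])
  .then(() => pool.end())
  .catch(console.error)
```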