Involved Source Files: batch.go, conn.go, copy_from.go, derived_types.go

Package pgx is a PostgreSQL database driver.
pgx provides a native PostgreSQL driver and can act as a database/sql driver. The native PostgreSQL interface is similar
to the database/sql interface while providing better speed and access to PostgreSQL specific features. Use
github.com/jackc/pgx/v5/stdlib to use pgx as a database/sql compatible driver. See that package's documentation for
details.
Establishing a Connection
The primary way of establishing a connection is with [pgx.Connect]:
conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
The database connection string can be in URL or key/value format. Both PostgreSQL settings and pgx settings can be
specified here. In addition, a config struct can be created by [ParseConfig] and modified before establishing the
connection with [ConnectConfig] to configure settings such as tracing that cannot be configured with a connection
string.
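For example, the following sketch parses a connection string, adjusts a setting, and then connects (the DATABASE_URL environment variable and the chosen timeout are illustrative):

```go
config, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
if err != nil {
	return err
}
config.ConnectTimeout = 5 * time.Second // pgconn.Config field, embedded in ConnConfig
conn, err := pgx.ConnectConfig(context.Background(), config)
if err != nil {
	return err
}
defer conn.Close(context.Background())
```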
Connection Pool
[*pgx.Conn] represents a single connection to the database and is not concurrency safe. Use package
github.com/jackc/pgx/v5/pgxpool for a concurrency safe connection pool.
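A minimal pgxpool sketch (the connection string source is an assumption):

```go
pool, err := pgxpool.New(context.Background(), os.Getenv("DATABASE_URL"))
if err != nil {
	return err
}
defer pool.Close()

// Unlike *pgx.Conn, the pool may be shared by many goroutines.
var greeting string
err = pool.QueryRow(context.Background(), "select 'hello'").Scan(&greeting)
```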
Query Interface
pgx implements Query in the familiar database/sql style. However, pgx provides generic functions such as CollectRows and
ForEachRow that are a simpler and safer way of processing rows than manually calling defer rows.Close(), rows.Next(),
rows.Scan, and rows.Err().
CollectRows can be used to collect all returned rows into a slice.
rows, _ := conn.Query(context.Background(), "select generate_series(1,$1)", 5)
numbers, err := pgx.CollectRows(rows, pgx.RowTo[int32])
if err != nil {
return err
}
// numbers => [1 2 3 4 5]
ForEachRow can be used to execute a callback function for every row. This is often easier than iterating over rows
directly.
var sum, n int32
rows, _ := conn.Query(context.Background(), "select generate_series(1,$1)", 10)
_, err := pgx.ForEachRow(rows, []any{&n}, func() error {
sum += n
return nil
})
if err != nil {
return err
}
pgx also implements QueryRow in the same style as database/sql.
var name string
var weight int64
err := conn.QueryRow(context.Background(), "select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
if err != nil {
return err
}
Use Exec to execute a query that does not return a result set.
commandTag, err := conn.Exec(context.Background(), "delete from widgets where id=$1", 42)
if err != nil {
return err
}
if commandTag.RowsAffected() != 1 {
return errors.New("No row found to delete")
}
PostgreSQL Data Types
pgx uses the pgtype package to convert Go values to and from PostgreSQL values. It supports many PostgreSQL types
directly and is customizable and extendable. User defined data types such as enums, domains, and composite types may
require type registration. See that package's documentation for details.
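For example, a sketch of registering a user defined enum type so it can be used in queries (the mood enum is a hypothetical example):

```go
// Assumes the server has: create type mood as enum ('happy', 'sad')
dataType, err := conn.LoadType(context.Background(), "mood")
if err != nil {
	return err
}
conn.TypeMap().RegisterType(dataType)

var m string
err = conn.QueryRow(context.Background(), "select 'happy'::mood").Scan(&m)
```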
Transactions
Transactions are started by calling Begin.
tx, err := conn.Begin(context.Background())
if err != nil {
return err
}
// Rollback is safe to call even if the tx is already closed, so if
// the tx commits successfully, this is a no-op
defer tx.Rollback(context.Background())
_, err = tx.Exec(context.Background(), "insert into foo(id) values (1)")
if err != nil {
return err
}
err = tx.Commit(context.Background())
if err != nil {
return err
}
The Tx returned from Begin also implements the Begin method. This can be used to implement pseudo nested transactions.
These are internally implemented with savepoints.
Use BeginTx to control the transaction mode. BeginTx also can be used to ensure a new transaction is created instead of
a pseudo nested transaction.
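For example, a sketch of BeginTx with explicit transaction options:

```go
tx, err := conn.BeginTx(context.Background(), pgx.TxOptions{
	IsoLevel:   pgx.Serializable,
	AccessMode: pgx.ReadOnly,
})
if err != nil {
	return err
}
defer tx.Rollback(context.Background())
```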
BeginFunc and BeginTxFunc are functions that begin a transaction, execute a function, and commit or rollback the
transaction depending on the return value of the function. These can be simpler and less error prone to use.
err = pgx.BeginFunc(context.Background(), conn, func(tx pgx.Tx) error {
_, err := tx.Exec(context.Background(), "insert into foo(id) values (1)")
return err
})
if err != nil {
return err
}
Prepared Statements
Prepared statements can be manually created with the Prepare method. However, this is rarely necessary because pgx
includes an automatic statement cache by default. Queries run through the normal Query, QueryRow, and Exec functions are
automatically prepared on first execution and the prepared statement is reused on subsequent executions. See ParseConfig
for information on how to customize or disable the statement cache.
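A sketch of manual preparation followed by execution via the statement name (the widgets table and statement name are assumptions):

```go
_, err := conn.Prepare(context.Background(), "get_widget_name", "select name from widgets where id=$1")
if err != nil {
	return err
}

var name string
// The prepared statement name may be passed in place of SQL text.
err = conn.QueryRow(context.Background(), "get_widget_name", 42).Scan(&name)
```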
Copy Protocol
Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a
CopyFromSource interface. If the data is already in a [][]any use CopyFromRows to wrap it in a CopyFromSource interface.
Or implement CopyFromSource to avoid buffering the entire data set in memory.
rows := [][]any{
{"John", "Smith", int32(36)},
{"Jane", "Doe", int32(29)},
}
copyCount, err := conn.CopyFrom(
context.Background(),
pgx.Identifier{"people"},
[]string{"first_name", "last_name", "age"},
pgx.CopyFromRows(rows),
)
When you already have a typed slice of rows, CopyFromSlice can be more convenient.
rows := []User{
{"John", "Smith", 36},
{"Jane", "Doe", 29},
}
copyCount, err := conn.CopyFrom(
context.Background(),
pgx.Identifier{"people"},
[]string{"first_name", "last_name", "age"},
pgx.CopyFromSlice(len(rows), func(i int) ([]any, error) {
return []any{rows[i].FirstName, rows[i].LastName, rows[i].Age}, nil
}),
)
CopyFrom can be faster than an insert with as few as 5 rows.
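A streaming sketch using CopyFromFunc, which avoids buffering the whole data set (the in-memory rows here stand in for a real streaming source such as a file scanner):

```go
rows := [][]any{
	{"John", "Smith", int32(36)},
	{"Jane", "Doe", int32(29)},
}
i := 0
copyCount, err := conn.CopyFrom(
	context.Background(),
	pgx.Identifier{"people"},
	[]string{"first_name", "last_name", "age"},
	pgx.CopyFromFunc(func() ([]any, error) {
		if i == len(rows) {
			return nil, nil // row=nil, err=nil signals end of data
		}
		row := rows[i]
		i++
		return row, nil
	}),
)
```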
Listen and Notify
pgx can listen to the PostgreSQL notification system with the `Conn.WaitForNotification` method. It blocks until a
notification is received or the context is canceled.
_, err := conn.Exec(context.Background(), "listen channelname")
if err != nil {
return err
}
notification, err := conn.WaitForNotification(context.Background())
if err != nil {
return err
}
// do something with notification
Tracing and Logging
pgx supports tracing by setting ConnConfig.Tracer. To combine several tracers you can use the multitracer.Tracer.
In addition, the tracelog package provides the TraceLog type which lets a traditional logger act as a Tracer.
For debug tracing of the actual PostgreSQL wire protocol messages see github.com/jackc/pgx/v5/pgproto3.
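For example, a sketch wiring a standard library logger into the Tracer via the tracelog package (the log format is illustrative):

```go
config, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
if err != nil {
	return err
}
config.Tracer = &tracelog.TraceLog{
	Logger: tracelog.LoggerFunc(func(ctx context.Context, level tracelog.LogLevel, msg string, data map[string]any) {
		log.Printf("pgx %s: %s %v", level, msg, data)
	}),
	LogLevel: tracelog.LogLevelDebug,
}
conn, err := pgx.ConnectConfig(context.Background(), config)
```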
Lower Level PostgreSQL Functionality
github.com/jackc/pgx/v5/pgconn contains a lower level PostgreSQL driver roughly at the level of libpq. pgx.Conn is
implemented on top of pgconn. The Conn.PgConn() method can be used to access this lower layer.
PgBouncer
By default pgx automatically uses prepared statements. Prepared statements are incompatible with PgBouncer. This can be
disabled by setting a different QueryExecMode in ConnConfig.DefaultQueryExecMode.

Involved Source Files: extended_query_builder.go, large_objects.go, named_args.go, rows.go, tracer.go, tx.go, values.go
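A sketch of disabling automatic prepared statements for PgBouncer compatibility:

```go
config, err := pgx.ParseConfig(os.Getenv("DATABASE_URL"))
if err != nil {
	return err
}
// QueryExecModeExec also avoids prepared statements while keeping the extended protocol.
config.DefaultQueryExecMode = pgx.QueryExecModeSimpleProtocol
conn, err := pgx.ConnectConfig(context.Background(), config)
```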
Package-Level Type Names (total 69, of which 46 are exported)
BatchResults (interface)

Close closes the batch operation. All unread results are read and any callback functions registered with
QueuedQuery.Query, QueuedQuery.QueryRow, or QueuedQuery.Exec will be called. If a callback function returns an
error or the batch encounters an error, subsequent callback functions will not be called.

For simple batch inserts inside a transaction or similar queries, it's sufficient to not set any callbacks,
and just handle the return value of Close.

Close must be called before the underlying connection can be used again. Any error that occurred during a batch
operation may have made it impossible to resynchronize the connection with the server. In this case the underlying
connection will have been closed.

Close is safe to call multiple times. If it returns an error, subsequent calls will return the same error. Callback
functions will not be rerun.

Exec reads the results from the next query in the batch as if the query has been sent with Conn.Exec. Prefer
calling Exec on the QueuedQuery, or just calling Close.

Query reads the results from the next query in the batch as if the query has been sent with Conn.Query. Prefer
calling Query on the QueuedQuery.

QueryRow reads the results from the next query in the batch as if the query has been sent with Conn.QueryRow.
Prefer calling QueryRow on the QueuedQuery.
*batchResults
*emptyBatchResults
*pipelineBatchResults
github.com/jackc/pgx/v5/pgxpool.errBatchResults
*github.com/jackc/pgx/v5/pgxpool.poolBatchResults
go.pact.im/x/pgxprocess.errBatchResults
BatchResults : io.Closer
func (*Conn).SendBatch(ctx context.Context, b *Batch) (br BatchResults)
func Tx.SendBatch(ctx context.Context, b *Batch) BatchResults
func github.com/jackc/pgx/v5/pgxpool.(*Conn).SendBatch(ctx context.Context, b *Batch) BatchResults
func github.com/jackc/pgx/v5/pgxpool.(*Pool).SendBatch(ctx context.Context, b *Batch) BatchResults
func github.com/jackc/pgx/v5/pgxpool.(*Tx).SendBatch(ctx context.Context, b *Batch) BatchResults
ConnConfig contains all the options used to establish a connection. It must be created by ParseConfig and
then it can be modified. A manually initialized ConnConfig will cause ConnectConfig to panic.

Config pgconn.Config (embedded)

AfterConnect is called after ValidateConnect. It can be used to set up the connection (e.g. set session variables
or prepare statements). If this returns an error the connection attempt fails.

AfterNetConnect is called after the network connection, including TLS if applicable, is established but before any
PostgreSQL protocol communication. It takes the established net.Conn and returns a net.Conn that will be used in
its place. It can be used to wrap the net.Conn (e.g. for logging, diagnostics, or testing). Its functionality has
some overlap with DialFunc. However, DialFunc takes place before TLS is established and cannot be used to control
the final net.Conn used for PostgreSQL protocol communication, while AfterNetConnect can.

BuildContextWatcherHandler is called to create a ContextWatcherHandler for a connection. The handler is called
when a context passed to a PgConn method is canceled.

Config.BuildFrontend pgconn.BuildFrontendFunc

ChannelBinding is the channel_binding parameter for SCRAM-SHA-256-PLUS authentication.
Valid values: "disable", "prefer", "require". Defaults to "prefer".

Config.ConnectTimeout time.Duration
Config.Database string
Config.DialFunc // e.g. net.Dialer.DialContext
Config.Fallbacks []*pgconn.FallbackConfig
Config.Host string // host (e.g. localhost) or absolute path to unix domain socket directory (e.g. /private/tmp)
Config.KerberosSpn string
Config.KerberosSrvName string
Config.LookupFunc // e.g. net.Resolver.LookupHost

MaxProtocolVersion is the maximum PostgreSQL protocol version to request from the server.
Valid values: "3.0", "3.2", "latest". Defaults to "3.0" for compatibility.

MinProtocolVersion is the minimum acceptable PostgreSQL protocol version.
If the server does not support at least this version, the connection will fail.
Valid values: "3.0", "3.2", "latest". Defaults to "3.0".

OAuthTokenProvider is a function that returns an OAuth token for authentication. If set, it will be used for
OAUTHBEARER SASL authentication when the server requests it.

OnNotice is a callback function called when a notice response is received.

OnNotification is a callback function called when a notification from the LISTEN/NOTIFY system is received.

OnPgError is a callback function called when a Postgres error is received by the server. The default handler will close
the connection on any FATAL errors. If you override this handler you should call the previously set handler or ensure
that you close on FATAL errors by returning false.

Config.Password string
Config.Port uint16
Config.RuntimeParams // run-time parameters to set on connection as session default values (e.g. search_path or application_name)
Config.SSLNegotiation // sslnegotiation=postgres or sslnegotiation=direct
Config.TLSConfig // nil disables TLS
Config.User string

ValidateConnect is called during a connection attempt after a successful authentication with the PostgreSQL server.
It can be used to validate that the server is acceptable. If this returns an error the connection is closed and the next
fallback config is tried. This allows implementing high availability behavior such as libpq does with target_session_attrs.

DefaultQueryExecMode controls the default mode for executing queries. By default pgx uses the extended protocol
and automatically prepares and caches prepared statements. However, this may be incompatible with proxies such as
PgBouncer. In this case it may be preferable to use QueryExecModeExec or QueryExecModeSimpleProtocol. The same
functionality can be controlled on a per query basis by passing a QueryExecMode as the first query argument.

DescriptionCacheCapacity is the maximum size of the description cache used when executing a query with
"cache_describe" query exec mode.

StatementCacheCapacity is the maximum size of the statement cache used when executing a query with "cache_statement"
query exec mode.

Tracer QueryTracer

ConnString returns the connection string as parsed by pgx.ParseConfig into pgx.ConnConfig.

Copy returns a deep copy of the config that is safe to use and modify. The only exception is the tls.Config:
according to the tls.Config docs it must not be modified after creation.
func ParseConfig(connString string) (*ConnConfig, error)
func ParseConfigWithOptions(connString string, options ParseConfigOptions) (*ConnConfig, error)
func (*Conn).Config() *ConnConfig
func (*ConnConfig).Copy() *ConnConfig
func ConnectConfig(ctx context.Context, connConfig *ConnConfig) (*Conn, error)
func connect(ctx context.Context, config *ConnConfig) (c *Conn, err error)
ConnectTracer traces Connect and ConnectConfig.

(ConnectTracer) TraceConnectEnd(ctx context.Context, data TraceConnectEndData)

TraceConnectStart is called at the beginning of Connect and ConnectConfig calls. The returned context is used for
the rest of the call and will be passed to TraceConnectEnd.
*github.com/jackc/pgx/v5/tracelog.TraceLog
CopyFromTracer traces CopyFrom.

(CopyFromTracer) TraceCopyFromEnd(ctx context.Context, conn *Conn, data TraceCopyFromEndData)

TraceCopyFromStart is called at the beginning of CopyFrom calls. The returned context is used for the
rest of the call and will be passed to TraceCopyFromEnd.
*github.com/jackc/pgx/v5/tracelog.TraceLog
ErrPreprocessingBatch occurs when an error is encountered while preprocessing a batch.
The two preprocessing steps are "prepare" (server-side SQL parse/plan) and
"build" (client-side argument encoding).

Fields: err error; sql string; step string // "prepare" or "build"

(ErrPreprocessingBatch) Error() string
(ErrPreprocessingBatch) SQL() string
(ErrPreprocessingBatch) Unwrap() error
ErrPreprocessingBatch : error
func newErrPreprocessingBatch(step, sql string, err error) ErrPreprocessingBatch
ExtendedQueryBuilder is used to choose the parameter formats, to format the parameters and to choose the result
formats for an extended query.

Fields: ParamFormats []int16; ParamValues [][]byte; ResultFormats []int16; paramValueBytes []byte

Build sets ParamValues, ParamFormats, and ResultFormats for use with *PgConn.ExecParams or *PgConn.ExecPrepared. If
sd is nil then QueryExecModeExec behavior will be used.

appendParam appends a parameter to the query. format may be -1 to automatically choose the format. If arg is nil it
must be an untyped nil.

appendResultFormat appends a result format to the query.

chooseParameterFormatCode determines the correct format code for an argument to a prepared statement. It defaults
to TextFormatCode if no determination can be made.

(*ExtendedQueryBuilder) encodeExtendedParamValue(m *pgtype.Map, oid uint32, formatCode int16, arg any) ([]byte, error)

reset readies eqb to build another query.
A LargeObject is a large object stored on the server. It is only valid within the transaction that it was initialized
in. It uses the context it was initialized with for all operations. It implements these interfaces:

io.Writer
io.Reader
io.Seeker
io.Closer

Fields: ctx context.Context; fd int32; tx Tx

Close the large object descriptor.

Read reads up to len(p) bytes into p returning the number of bytes read.

Seek moves the current location pointer to the new location specified by offset.

Tell returns the current read or write location of the large object descriptor.

Truncate the large object to size.

Write writes p to the large object and returns the number of bytes written and an error if not all of p was written.
*LargeObject : io.Closer
*LargeObject : io.ReadCloser
*LargeObject : io.Reader
*LargeObject : io.ReadSeekCloser
*LargeObject : io.ReadSeeker
*LargeObject : io.ReadWriteCloser
*LargeObject : io.ReadWriter
*LargeObject : io.ReadWriteSeeker
*LargeObject : io.Seeker
*LargeObject : io.WriteCloser
*LargeObject : io.Writer
*LargeObject : io.WriteSeeker
func (*LargeObjects).Open(ctx context.Context, oid uint32, mode LargeObjectMode) (*LargeObject, error)
LargeObjects is a structure used to access the large objects API. It is only valid within the transaction where it
was created.

For more details see: http://www.postgresql.org/docs/current/static/largeobjects.html

Fields: tx Tx

Create creates a new large object. If oid is zero, the server assigns an unused OID.

Open opens an existing large object with the given mode. ctx will also be used for all operations on the opened large
object.

Unlink removes a large object from the database.
func Tx.LargeObjects() LargeObjects
func github.com/jackc/pgx/v5/pgxpool.(*Tx).LargeObjects() LargeObjects
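A sketch of the large objects API inside a transaction:

```go
tx, err := conn.Begin(context.Background())
if err != nil {
	return err
}
defer tx.Rollback(context.Background())

lo := tx.LargeObjects()
oid, err := lo.Create(context.Background(), 0) // 0 lets the server assign an OID
if err != nil {
	return err
}
obj, err := lo.Open(context.Background(), oid, pgx.LargeObjectModeWrite)
if err != nil {
	return err
}
if _, err := obj.Write([]byte("hello")); err != nil {
	return err
}
if err := obj.Close(); err != nil {
	return err
}
return tx.Commit(context.Background())
```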
NamedArgs can be used as the first argument to a query method. It will replace every '@' named placeholder with a '$'
ordinal placeholder and construct the appropriate arguments.
For example, the following two queries are equivalent:
conn.Query(ctx, "select * from widgets where foo = @foo and bar = @bar", pgx.NamedArgs{"foo": 1, "bar": 2})
conn.Query(ctx, "select * from widgets where foo = $1 and bar = $2", 1, 2)
Named placeholders are case sensitive and must start with a letter or underscore. Subsequent characters can be
letters, numbers, or underscores.

RewriteQuery implements the QueryRewriter interface.
NamedArgs : QueryRewriter
PrepareTracer traces Prepare.

(PrepareTracer) TracePrepareEnd(ctx context.Context, conn *Conn, data TracePrepareEndData)

TracePrepareStart is called at the beginning of Prepare calls. The returned context is used for the
rest of the call and will be passed to TracePrepareEnd.
*github.com/jackc/pgx/v5/tracelog.TraceLog
QueryTracer traces Query, QueryRow, and Exec.

(QueryTracer) TraceQueryEnd(ctx context.Context, conn *Conn, data TraceQueryEndData)

TraceQueryStart is called at the beginning of Query, QueryRow, and Exec calls. The returned context is used for the
rest of the call and will be passed to TraceQueryEnd.
*github.com/jackc/pgx/v5/tracelog.TraceLog
QueuedQuery is a query that has been queued for execution via a Batch.

Fields: Arguments []any; Fn batchItemFunc; SQL string; sd *pgconn.StatementDescription

Exec sets fn to be called when the response to qq is received.

Note: for simple batch insert uses where it is not required to handle each potential error individually, it's
sufficient to not set any callbacks, and just handle the return value of BatchResults.Close.

Query sets fn to be called when the response to qq is received.

QueryRow sets fn to be called when the response to qq is received.
func (*Batch).Queue(query string, arguments ...any) *QueuedQuery
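A batch sketch with a per-query callback (the widgets table is an assumption):

```go
batch := &pgx.Batch{}
batch.Queue("insert into widgets(name) values($1)", "gizmo")
batch.Queue("select count(*) from widgets").QueryRow(func(row pgx.Row) error {
	var n int64
	return row.Scan(&n)
})
// Close sends the batch, runs the callbacks, and returns the first error, if any.
err := conn.SendBatch(context.Background(), batch).Close()
```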
Row is a convenience wrapper over Rows that is returned by QueryRow.
Row is an interface instead of a struct to allow tests to mock QueryRow. However,
adding a method to an interface is technically a breaking change. Because of this
the Row interface is partially excluded from semantic version requirements.
Methods will not be removed or changed, but new methods may be added.

Scan works the same as Rows.Scan with the following exceptions. If no rows were found it returns ErrNoRows. If
multiple rows are returned it ignores all but the first.

CollectableRow (interface)
Rows (interface)
*database/sql.Row
*database/sql.Rows
*baseRows
*connRow
github.com/jackc/pgx/v5/pgxpool.errRow
github.com/jackc/pgx/v5/pgxpool.errRows
*github.com/jackc/pgx/v5/pgxpool.poolRow
*github.com/jackc/pgx/v5/pgxpool.poolRows
go.pact.im/x/pgxprocess.errRow
go.pact.im/x/pgxprocess.errRows
func BatchResults.QueryRow() Row
func (*Conn).QueryRow(ctx context.Context, sql string, args ...any) Row
func Tx.QueryRow(ctx context.Context, sql string, args ...any) Row
func github.com/jackc/pgx/v5/pgxpool.(*Conn).QueryRow(ctx context.Context, sql string, args ...any) Row
func github.com/jackc/pgx/v5/pgxpool.(*Pool).QueryRow(ctx context.Context, sql string, args ...any) Row
func github.com/jackc/pgx/v5/pgxpool.(*Tx).QueryRow(ctx context.Context, sql string, args ...any) Row
func github.com/jackc/pgx/v5/pgxpool.(*Conn).getPoolRow(r Row) *pgxpool.poolRow
Rows is the result set returned from *Conn.Query. Rows must be closed before
the *Conn can be used again. Rows are closed by explicitly calling Close(),
calling Next() until it returns false, or when a fatal error occurs.
Once a Rows is closed the only methods that may be called are Close(), Err(),
and CommandTag().
Rows is an interface instead of a struct to allow tests to mock Query. However,
adding a method to an interface is technically a breaking change. Because of this
the Rows interface is partially excluded from semantic version requirements.
Methods will not be removed or changed, but new methods may be added.

Close closes the rows, making the connection ready for use again. It is safe to call Close after rows is already
closed.

CommandTag returns the command tag from this query. It is only available after Rows is closed.

Conn returns the underlying *Conn on which the query was executed. This may return nil if Rows did not come from a
*Conn (e.g. if it was created by RowsFromResultReader).

Err returns any error that occurred while executing a query or reading its results. Err must be called after the
Rows is closed (either by calling Close or by Next returning false) to check if the query was successful. If it is
called before the Rows is closed it may return nil even if the query failed on the server.

FieldDescriptions returns the field descriptions of the columns. It may return nil. In particular this can occur
when there was an error executing the query.

Next prepares the next row for reading. It returns true if there is another row and false if no more rows are
available or a fatal error has occurred. It automatically closes rows upon returning false (whether due to all rows
having been read or due to an error).

Callers should check rows.Err() after rows.Next() returns false to detect whether result-set reading ended
prematurely due to an error. See Conn.Query for details.

For simpler error handling, consider using the higher-level pgx v5 CollectRows() and ForEachRow() helpers instead.

RawValues returns the unparsed bytes of the row values. The returned data is only valid until the next Next
call or the Rows is closed.

Scan reads the values from the current row into dest values positionally. dest can include pointers to core types,
values implementing the Scanner interface, and nil. nil will skip the value entirely. It is an error to call Scan
without first calling Next() and checking that it returned true. Rows is automatically closed upon error.

Values returns the decoded row values. As with Scan(), it is an error to call Values without first calling Next()
and checking that it returned true.
*baseRows
github.com/jackc/pgx/v5/pgxpool.errRows
*github.com/jackc/pgx/v5/pgxpool.poolRows
go.pact.im/x/pgxprocess.errRows
Rows : CollectableRow
Rows : CopyFromSource
Rows : Row
func RowsFromResultReader(typeMap *pgtype.Map, resultReader *pgconn.ResultReader) Rows
func BatchResults.Query() (Rows, error)
func (*Conn).Query(ctx context.Context, sql string, args ...any) (Rows, error)
func Tx.Query(ctx context.Context, sql string, args ...any) (Rows, error)
func github.com/jackc/pgx/v5/pgxpool.(*Conn).Query(ctx context.Context, sql string, args ...any) (Rows, error)
func github.com/jackc/pgx/v5/pgxpool.(*Pool).Query(ctx context.Context, sql string, args ...any) (Rows, error)
func github.com/jackc/pgx/v5/pgxpool.(*Tx).Query(ctx context.Context, sql string, args ...any) (Rows, error)
func AppendRows[T, S](slice S, rows Rows, fn RowToFunc[T]) (S, error)
func CollectExactlyOneRow[T](rows Rows, fn RowToFunc[T]) (T, error)
func CollectOneRow[T](rows Rows, fn RowToFunc[T]) (T, error)
func CollectRows[T](rows Rows, fn RowToFunc[T]) ([]T, error)
func ForEachRow(rows Rows, scans []any, fn func() error) (pgconn.CommandTag, error)
func RowScanner.ScanRow(rows Rows) error
func github.com/jackc/pgx/v5/pgxpool.(*Conn).getPoolRows(r Rows) *pgxpool.poolRows
RowScanner scans an entire row at a time into the RowScanner.

ScanRow scans the row.
*mapRowScanner
StrictNamedArgs can be used in the same way as NamedArgs, but the provided arguments are also checked to include all
named arguments that the SQL query uses, and no extra arguments.

RewriteQuery implements the QueryRewriter interface.
StrictNamedArgs : QueryRewriter
Tx represents a database transaction.
Tx is an interface instead of a struct to enable connection pools to be implemented without relying on internal pgx
state, to support pseudo-nested transactions with savepoints, and to allow tests to mock transactions. However,
adding a method to an interface is technically a breaking change. If new methods are added to Conn it may be
desirable to add them to Tx as well. Because of this the Tx interface is partially excluded from semantic version
requirements. Methods will not be removed or changed, but new methods may be added.

Begin starts a pseudo nested transaction.

Commit commits the transaction if this is a real transaction or releases the savepoint if this is a pseudo nested
transaction. Commit will return an error where errors.Is(ErrTxClosed) is true if the Tx is already closed, but is
otherwise safe to call multiple times. If the commit fails with a rollback status (e.g. the transaction was already
in a broken state) then an error where errors.Is(ErrTxCommitRollback) is true will be returned.

Conn returns the underlying *Conn on which this transaction is executing.

(Tx) CopyFrom(ctx context.Context, tableName Identifier, columnNames []string, rowSrc CopyFromSource) (int64, error)
(Tx) Exec(ctx context.Context, sql string, arguments ...any) (commandTag pgconn.CommandTag, err error)
(Tx) LargeObjects() LargeObjects
(Tx) Prepare(ctx context.Context, name, sql string) (*pgconn.StatementDescription, error)
(Tx) Query(ctx context.Context, sql string, args ...any) (Rows, error)
(Tx) QueryRow(ctx context.Context, sql string, args ...any) Row

Rollback rolls back the transaction if this is a real transaction or rolls back to the savepoint if this is a
pseudo nested transaction. Rollback will return an error where errors.Is(ErrTxClosed) is true if the Tx is already
closed, but is otherwise safe to call multiple times. Hence, a defer tx.Rollback() is safe even if tx.Commit() will
be called first in a non-error condition. Any other failure of a real transaction will result in the connection
being closed.

(Tx) SendBatch(ctx context.Context, b *Batch) BatchResults
*github.com/jackc/pgx/v5/pgxpool.Tx
*dbSimulatedNestedTx
*dbTx
func (*Conn).Begin(ctx context.Context) (Tx, error)
func (*Conn).BeginTx(ctx context.Context, txOptions TxOptions) (Tx, error)
func Tx.Begin(ctx context.Context) (Tx, error)
func github.com/jackc/pgx/v5/pgxpool.(*Conn).Begin(ctx context.Context) (Tx, error)
func github.com/jackc/pgx/v5/pgxpool.(*Conn).BeginTx(ctx context.Context, txOptions TxOptions) (Tx, error)
func github.com/jackc/pgx/v5/pgxpool.(*Pool).Begin(ctx context.Context) (Tx, error)
func github.com/jackc/pgx/v5/pgxpool.(*Pool).BeginTx(ctx context.Context, txOptions TxOptions) (Tx, error)
func github.com/jackc/pgx/v5/pgxpool.(*Tx).Begin(ctx context.Context) (Tx, error)
func beginFuncExec(ctx context.Context, tx Tx, fn func(Tx) error) (err error)
TxAccessMode is the transaction access mode (read write or read only)
const ReadOnly
const ReadWrite
TxDeferrableMode is the transaction deferrable mode (deferrable or not deferrable)
const Deferrable
const NotDeferrable
dbSimulatedNestedTx represents a simulated nested transaction implemented by a savepoint.

Fields: closed bool; savepointNum int64; tx Tx

Begin starts a pseudo nested transaction implemented with a savepoint.

Commit releases the savepoint, essentially committing the pseudo nested transaction.

(*dbSimulatedNestedTx) Conn() *Conn

CopyFrom delegates to the underlying *Conn.

Exec delegates to the underlying Tx.

(*dbSimulatedNestedTx) LargeObjects() LargeObjects

Prepare delegates to the underlying Tx.

Query delegates to the underlying Tx.

QueryRow delegates to the underlying Tx.

Rollback rolls back to the savepoint, essentially rolling back the pseudo nested transaction. Rollback will return
ErrTxClosed if the dbSavepoint is already closed, but is otherwise safe to call multiple times. Hence, a defer
sp.Rollback() is safe even if sp.Commit() will be called first in a non-error condition.

SendBatch delegates to the underlying *Conn.
*dbSimulatedNestedTx : Tx
dbTx represents a database transaction.

All dbTx methods return ErrTxClosed if Commit or Rollback has already been called on the dbTx.

Fields: closed bool; commitQuery string; conn *Conn; savepointNum int64

Begin starts a pseudo nested transaction implemented with a savepoint.

Commit commits the transaction.

(*dbTx) Conn() *Conn

CopyFrom delegates to the underlying *Conn.

Exec delegates to the underlying *Conn.

LargeObjects returns a LargeObjects instance for the transaction.

Prepare delegates to the underlying *Conn.

Query delegates to the underlying *Conn.

QueryRow delegates to the underlying *Conn.

Rollback rolls back the transaction. Rollback will return ErrTxClosed if the Tx is already closed, but is otherwise
safe to call multiple times. Hence, a defer tx.Rollback() is safe even if tx.Commit() will be called first in a
non-error condition.

SendBatch delegates to the underlying *Conn.
*dbTx : Tx
emptyBatchResults

Fields: closed bool; conn *Conn

Close closes the batch operation. Any error that occurred during a batch operation may have made it impossible to
resynchronize the connection with the server. In this case the underlying connection will have been closed.

Exec reads the results from the next query in the batch as if the query has been sent with Exec.

Query reads the results from the next query in the batch as if the query has been sent with Query.

QueryRow reads the results from the next query in the batch as if the query has been sent with QueryRow.
*emptyBatchResults : BatchResults
*emptyBatchResults : io.Closer
Package-Level Functions (total 53, of which 26 are exported)
Type Parameters:
T: any
S: ~[]T
AppendRows iterates through rows, calling fn for each row, and appending the results into a slice of T.
This function closes the rows automatically on return.
BeginFunc calls Begin on db and then calls fn. If fn does not return an error then it calls Commit on db. If fn
returns an error it calls Rollback on db. The context will be used when executing the transaction control statements
(BEGIN, ROLLBACK, and COMMIT) but does not otherwise affect the execution of fn.
BeginTxFunc calls BeginTx on db and then calls fn. If fn does not return an error then it calls Commit on db. If fn
returns an error it calls Rollback on db. The context will be used when executing the transaction control statements
(BEGIN, ROLLBACK, and COMMIT) but does not otherwise affect the execution of fn.
Type Parameters:
T: any

CollectExactlyOneRow calls fn for the first row in rows and returns the result.
- If no rows are found returns an error where errors.Is(ErrNoRows) is true.
- If more than 1 row is found returns an error where errors.Is(ErrTooManyRows) is true.
This function closes the rows automatically on return.
Type Parameters:
T: any

CollectOneRow calls fn for the first row in rows and returns the result. If no rows are found it returns an error
where errors.Is(ErrNoRows) is true.
CollectOneRow is to CollectRows as QueryRow is to Query.
This function closes the rows automatically on return.
Type Parameters:
T: any CollectRows iterates through rows, calling fn for each row, and collecting the results into a slice of T.
This function closes the rows automatically on return.
Connect establishes a connection with a PostgreSQL server with a connection string. See
pgconn.Connect for details.
ConnectConfig establishes a connection with a PostgreSQL server with a configuration struct.
connConfig must have been created by ParseConfig.
ConnectWithOptions behaves exactly like Connect with the addition of options. At present, options is only used to
provide a GetSSLPassword function.
CopyFromFunc returns a CopyFromSource interface that relies on nxtf for values.
nxtf returns rows until it either signals an 'end of data' by returning row=nil and err=nil,
or it returns an error. If nxtf returns an error, the copy is aborted.
CopyFromRows returns a CopyFromSource interface over the provided rows slice
making it usable by *Conn.CopyFrom.
CopyFromSlice returns a CopyFromSource interface over a dynamic func
making it usable by *Conn.CopyFrom.
ForEachRow iterates through rows. For each row it scans into the elements of scans and calls fn. If any row
fails to scan or fn returns an error the query will be aborted and the error will be returned. Rows will be closed
when ForEachRow returns.
ParseConfig creates a ConnConfig from a connection string. ParseConfig handles all options that [pgconn.ParseConfig]
does. In addition, it accepts the following options:
- default_query_exec_mode.
Possible values: "cache_statement", "cache_describe", "describe_exec", "exec", and "simple_protocol". See
QueryExecMode constant documentation for the meaning of these values. Default: "cache_statement".
- statement_cache_capacity.
The maximum size of the statement cache used when executing a query with "cache_statement" query exec mode.
Default: 512.
- description_cache_capacity.
The maximum size of the description cache used when executing a query with "cache_describe" query exec mode.
Default: 512.
ParseConfigWithOptions behaves exactly as ParseConfig does with the addition of options. At present, options is
only used to provide a GetSSLPassword function.
RowsFromResultReader returns a Rows that will read from values resultReader and decode with typeMap. It can be used
to read from the lower level pgconn interface.
Type Parameters:
T: any RowTo returns a T scanned from row.
Type Parameters:
T: any RowToAddrOf returns the address of a T scanned from row.
Type Parameters:
T: any RowToAddrOfStructByName returns the address of a T scanned from row. T must be a struct. T must have the same number
of named public fields as row has fields. The row and T fields will be matched by name. The match is
case-insensitive. The database column name can be overridden with a "db" struct tag. If the "db" struct tag is "-"
then the field will be ignored.
Type Parameters:
T: any RowToAddrOfStructByNameLax returns the address of a T scanned from row. T must be a struct. T must have greater than or
equal number of named public fields as row has fields. The row and T fields will be matched by name. The match is
case-insensitive. The database column name can be overridden with a "db" struct tag. If the "db" struct tag is "-"
then the field will be ignored.
Type Parameters:
T: any RowToAddrOfStructByPos returns the address of a T scanned from row. T must be a struct. T must have the same number of
public fields as row has fields. The row and T fields will be matched by position. If the "db" struct tag is "-" then
the field will be ignored.
RowToMap returns a map scanned from row.
Type Parameters:
T: any RowToStructByName returns a T scanned from row. T must be a struct. T must have the same number of named public
fields as row has fields. The row and T fields will be matched by name. The match is case-insensitive. The database
column name can be overridden with a "db" struct tag. If the "db" struct tag is "-" then the field will be ignored.
Type Parameters:
T: any RowToStructByNameLax returns a T scanned from row. T must be a struct. T must have greater than or equal number of named public
fields as row has fields. The row and T fields will be matched by name. The match is case-insensitive. The database
column name can be overridden with a "db" struct tag. If the "db" struct tag is "-" then the field will be ignored.
Type Parameters:
T: any RowToStructByPos returns a T scanned from row. T must be a struct. T must have the same number of public fields as row
has fields. The row and T fields will be matched by position. If the "db" struct tag is "-" then the field will be
ignored.
ScanRow decodes raw row data into dest. It can be used to scan rows read from the lower level pgconn interface.
typeMap - OID to Go type mapping.
fieldDescriptions - OID and format of values
values - the raw data as returned from the PostgreSQL server
dest - the destination that values will be decoded into
buildLoadDerivedTypesSQL generates the correct query for retrieving type information.
pgVersion: the major version of the PostgreSQL server
typeNames: the names of the types to load. If nil, load all types.
ErrTxCommitRollback occurs when an error has occurred in a transaction and
Commit() is called. PostgreSQL accepts COMMIT on aborted transactions, but
it is treated as ROLLBACK.
The PostgreSQL wire protocol has a limit of 1 GB - 1 per message. See definition of
PQ_LARGE_MESSAGE_LIMIT in the PostgreSQL source code. To allow for the other data
in the message, maxLargeObjectMessageLength should be no larger than 1 GB - 1 KB.
namedStructFieldMap: map from reflect.Type -> *namedStructFields
Map from reflect.Type -> []structRowField
Package-Level Constants (total 18, in which 17 are exported)
Cache statement descriptions (i.e. argument and result types) and assume they do not change. This uses the extended
protocol. Queries are executed in a single round trip after the description is cached. If the database schema is
modified or the search_path is changed after a statement is cached then the first execution of a previously cached
query may fail. e.g. If the number of columns returned by a "SELECT *" changes or the type of a column is changed.
Automatically prepare and cache statements. This uses the extended protocol. Queries are executed in a single round
trip after the statement is cached. This is the default. If the database schema is modified or the search_path is
changed after a statement is cached then the first execution of a previously cached query may fail. e.g. If the
number of columns returned by a "SELECT *" changes or the type of a column is changed.
Get the statement description on every execution. This uses the extended protocol. Queries require two round trips
to execute. It does not use named prepared statements. But it does use the unnamed prepared statement to get the
statement description on the first round trip and then uses it to execute the query on the second round trip. This
may cause problems with connection poolers that switch the underlying connection between round trips. It is safe
even when the database schema is modified concurrently.
Assume the PostgreSQL query parameter types based on the Go type of the arguments. This uses the extended protocol
with text formatted parameters and results. Queries are executed in a single round trip. Type mappings can be
registered with pgtype.Map.RegisterDefaultPgType. Queries will be rejected that have arguments that are
unregistered or ambiguous. e.g. A map[string]string may have the PostgreSQL type json or hstore. Modes that know
the PostgreSQL type can use a map[string]string directly as an argument. This mode cannot.
On rare occasions user defined types may behave differently when encoded in the text format instead of the binary
format. For example, this could happen if a "type RomanNumeral int32" implements fmt.Stringer to format integers as
Roman numerals (e.g. 7 is VII). The binary format would properly encode the integer 7 as the binary value for 7.
But the text format would encode the integer 7 as the string "VII". As QueryExecModeExec uses the text format, it
is possible that changing query mode from another mode to QueryExecModeExec could change the behavior of the query.
This should not occur with types pgx supports directly and can be avoided by registering the types with
pgtype.Map.RegisterDefaultPgType and implementing the appropriate type interfaces. In the case of RomanNumeral, it
should implement pgtype.Int64Valuer.
Use the simple protocol. Assume the PostgreSQL query parameter types based on the Go type of the arguments. This is
especially significant for []byte values. []byte values are encoded as PostgreSQL bytea. string must be used
instead for text type values including json and jsonb. Type mappings can be registered with
pgtype.Map.RegisterDefaultPgType. Queries will be rejected that have arguments that are unregistered or ambiguous.
e.g. A map[string]string may have the PostgreSQL type json or hstore. Modes that know the PostgreSQL type can use a
map[string]string directly as an argument. This mode cannot. Queries are executed in a single round trip.
QueryExecModeSimpleProtocol should have the same user-application-visible behavior as QueryExecModeExec. This
includes the warning regarding differences in text format and binary format encoding with user defined types. There
may be other minor exceptions, such as the behavior when multiple result-returning queries are erroneously sent in a
single string.
QueryExecModeSimpleProtocol uses client side parameter interpolation. All values are quoted and escaped. Prefer
QueryExecModeExec over QueryExecModeSimpleProtocol whenever possible. In general QueryExecModeSimpleProtocol should
only be used if connecting to a proxy server, connection pool server, or non-PostgreSQL server that does not
support the extended protocol.