The error means just what it says: more than one row returned by a subquery used as an expression. The expression in the WHERE clause expects a single value (just like the one you substituted in your added test), but your subquery returns multiple rows. jsonb_array_elements() is a set-returning function. Assuming this...
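As a sketch of the usual fix (table and column names here are hypothetical, not from the question): compare against the whole set returned by the set-returning function instead of expecting a single value.

```sql
-- Hypothetical names. This fails as soon as the jsonb array has more than one element:
-- SELECT * FROM orders WHERE customer_id = (SELECT jsonb_array_elements_text(ids)::int FROM batch);

-- This works: treat the subquery result as a set.
SELECT *
FROM   orders
WHERE  customer_id IN (SELECT jsonb_array_elements_text(ids)::int FROM batch);
```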

It is not really DESC LIMIT 1: the DESC belongs to ORDER BY t.timestamp02 DESC, and LIMIT 1 is a separate clause. The documentation will tell you that DESC makes ORDER BY sort in descending order, and that LIMIT 1 limits the result set to 1 row. In...
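To illustrate the two separate clauses (t and timestamp02 are taken from the answer; the rest is assumed):

```sql
SELECT *
FROM   some_table t
ORDER  BY t.timestamp02 DESC  -- DESC is part of the ORDER BY clause
LIMIT  1;                     -- LIMIT is an independent clause: keep only the first row
```

Together they return the single row with the largest timestamp02.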

I do not think there is an operator which you can use in such a simple query. You can achieve this with the query below, though I do not know whether it is the simplest way: select distinct on (id) id, data from ( select id, data, (json_each(data)).value::text from json_table...

Prepared statements are used to speed up the repeated execution of the same query with different arguments. If your aim is to insert many rows at once, it is better to execute a regular multi-row INSERT query, which will be faster than the prepared single-row INSERT. However, if you insisted on this solution,...
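A sketch of the multi-row alternative (hypothetical table and columns):

```sql
-- One statement, one round trip - typically faster than re-executing
-- a prepared single-row INSERT many times:
INSERT INTO tbl (id, val) VALUES
  (1, 'a'),
  (2, 'b'),
  (3, 'c');
```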

Laravel uses a check constraint on character varying for enum columns. Assuming there is a table mytable with an enum column status, we have to drop the constraint (named tablename_columnname_check) and then add it back in a migration like this: DB::transaction(function () { DB::statement('ALTER TABLE mytable DROP CONSTRAINT mytable_status_check;'); DB::statement('ALTER TABLE mytable ADD CONSTRAINT mytable_status_check...

The INSERT will just insert all rows and nothing special will happen, unless you have some kind of constraint disallowing duplicate / overlapping values (PRIMARY KEY, UNIQUE, CHECK or EXCLUDE constraint) - which you did not mention in your question. But that's what you are probably worried about. Assuming a...
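If the worry is indeed a duplicate-key error, one common pattern on Postgres 9.5 or later (names here are hypothetical) is to skip conflicting rows:

```sql
-- Assuming a UNIQUE or PRIMARY KEY constraint on id:
INSERT INTO tbl (id, val)
VALUES (1, 'a'), (2, 'b')
ON CONFLICT (id) DO NOTHING;  -- rows that would violate the constraint are silently skipped
```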

I think not, sorry. day, month etc. are field names of the interval type and will always be in English - just as you wouldn't expect to use "vybrac" instead of SELECT :) But you can have locale-aware time values, yes: td=# set lc_time TO pl_PL; SET td=# SELECT to_char(to_timestamp (4::text, 'MM'), 'TMmon'); to_char ---------...

There is no way to do what you want in PostgreSQL as it stands. It'd be interesting to do but a fair bit of work, very unlikely to be accepted into core, extremely hard to do with an extension, and likely to have worse side-effects than you probably expect. You'd...

The error is actually not about the condition or anything in the function itself, but about the syntax of the function creation. You start the function definition with $func$ and end it with $$. This will not work: the opening and closing dollar-quote tags must match. Change the $func$ to $$ to fix the syntax. ...

request.user.user_profile makes an SQL query to get the user profile, then user_profile.games_liked.all() makes a second SQL query to get the liked games. So, from this point of view, Game.objects.filter(likes=request.user) is better. You can check what SQL was executed in the shell by looking into the connection.queries variable (available only if setting DEBUG=True): from django.db import...

Because you are using PostgreSQL: def self.search(query) where("description ilike ?", "%#{query}%") end Just use ilike instead of like. like/ilike documentation. If you want to use =, wrap both sides in either UPPER or LOWER...

This query uses a WITH clause (common table expression), which works similarly to a sub-query. Investigate this query with EXPLAIN before using it in production, because it may be slow on big tables: WITH orders AS ( SELECT email , first_value(dt_cr) OVER wnd1 AS min_date , nth_value(dt_cr, 2) OVER wnd1 AS second_date FROM orders WINDOW wnd1...

There are several ways to do it. Assuming a._id is a serial column: insert into a (data) values ('foo'); insert into b (_id, other_data) values (lastval(), 'foobar'); Edit (after discussion in comments): Note that lastval() is concurrency safe (as are all sequence-related functions). Even if other sessions insert something into...
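An alternative to lastval(), using the table and column names from the answer, is a data-modifying CTE that carries the generated id along in a single statement:

```sql
WITH ins AS (
   INSERT INTO a (data)
   VALUES ('foo')
   RETURNING _id          -- the freshly generated serial value
)
INSERT INTO b (_id, other_data)
SELECT _id, 'foobar'
FROM   ins;
```

This avoids any question of which session-level "last value" you are reading.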

Assuming you know it's a trigger function (i.e. RETURNS TRIGGER), this should do it: SELECT tgname, tgrelid::regclass FROM pg_trigger WHERE tgfoid = 'func1'::regproc If func1 is overloaded, you would need to use e.g. tgfoid = 'func1(text,text)'::regprocedure. But in general, it might also appear in pg_aggregate, or pg_cast, or in a...

ID is unique, so grouping by ID works just like a plain select. Column createdAt is not unique, and rows with the same createdAt value must be grouped. You should specify how they are to be grouped - use an aggregate function, remove them from the select clause, or add them to...
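A minimal sketch of the options, with assumed table and column names:

```sql
-- Either aggregate the columns that are not in the GROUP BY ...
SELECT "createdAt", count(*) AS rows_per_date, max(id) AS max_id
FROM   t
GROUP  BY "createdAt";

-- ... or add every selected column to the GROUP BY:
SELECT id, "createdAt"
FROM   t
GROUP  BY id, "createdAt";
```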

You can achieve this with a window function which doesn't require everything to be grouped: select * from ( select addresses.phone, addresses.name, orders.order_number, count(orders.order_number) over (partition by addresses.phone) as cnt from orders inner join carts on orders.order_number = carts.id inner join address on carts.address_id = addresses.id ) t where cnt...

From the docs: Statement-level BEFORE triggers naturally fire before the statement starts to do anything, while statement-level AFTER triggers fire at the very end of the statement. http://www.postgresql.org/docs/9.4/interactive/trigger-definition.html In other words, a before statement trigger will be run just before the statement executes, and an after statement trigger will be...

Your table uses a carid value to retrieve the corresponding part_ids from function PartsPerCar() which returns a set of rows, a so-called table function. In sub-section 5 (keep on reading) we see that Table functions appearing in FROM can also be preceded by the key word LATERAL, but for functions...
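A sketch of the LATERAL form, assuming PartsPerCar(int) returns a set of part_id values and the base table is called cars (both assumptions, following the answer's names):

```sql
SELECT c.carid, p.part_id
FROM   cars c
CROSS  JOIN LATERAL PartsPerCar(c.carid) AS p(part_id);

-- For functions, the LATERAL key word is optional, so this also works:
-- FROM cars c, PartsPerCar(c.carid) AS p(part_id)
```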

Just pad the columns you don't have and union. For instance: select activities.type, sum(activity_fees.amount) ... Union SELECT 'dummy', avg(activities.rating) .... Or just include activities.type since you have it available!...

If you DELETE rows or UPDATE them, then VACUUM is required to free the space for re-use. PostgreSQL usually does this automatically with autovacuum, so it is not common for you to need to manually run VACUUM. You might manually run VACUUM after updating a large proportion of a table,...

In an AFTER INSERT trigger, any changes you make to NEW.time_index will be ignored. The record is already inserted at this point; it's too late to modify it. Create the trigger as BEFORE INSERT instead....
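A minimal sketch (table and column names assumed from the answer) of the BEFORE INSERT version, where modifying NEW actually takes effect:

```sql
CREATE OR REPLACE FUNCTION set_time_index()
  RETURNS trigger AS
$$
BEGIN
   NEW.time_index := now();  -- effective only because this runs BEFORE INSERT
   RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_set_time_index
BEFORE INSERT ON my_table
FOR EACH ROW EXECUTE PROCEDURE set_time_index();
```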

Writing something to storage will prevent all other threads from writing to the same storage. You cannot simply make everything multithreaded. In this case, all other parts need to wait for the previous one to finish....

Your question update made clear that you cannot do this in a single statement. Packed into CTEs of the same statement, both operations (INSERT and DELETE) would see the same snapshot of the table and execute virtually at the same time. I.e., the INSERT would still see all rows that...

The query in the question has a couple of obviously nonsensical parts. Since you have been executing it before, I assume these are artifacts from manual simplification? Like: total * 100 / total, which is pointless since it boils down to just 100. Or: joins without join conditions, which are...

You need to lock the model: account = Account.first account.with_lock do # This block is called within a transaction, # account is already locked. account.balance -= 100 account.save! end You can read more about it here: http://api.rubyonrails.org/classes/ActiveRecord/Locking/Pessimistic.html...

The advantages and disadvantages of PLV8 are the same as the advantages and disadvantages of PLPerl, PLPython and other PL languages. It is not integrated with the PostgreSQL engine, so processing the results of SQL statements can be slower. PLpgSQL is fully integrated into the PostgreSQL engine. SQL is not integrated into the language - it isn't...

A trigger procedure is created with the CREATE FUNCTION command, declaring it as a function with no arguments and a return type of trigger. You can access the arguments passed to the trigger via TG_ARGV; other automatic variables are available as well, e.g. TG_TABLE_NAME - the name of the table that caused the trigger invocation....
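A small sketch (hypothetical names) showing a trigger argument arriving in TG_ARGV, alongside the automatic TG_TABLE_NAME variable:

```sql
CREATE OR REPLACE FUNCTION log_change()
  RETURNS trigger AS
$$
BEGIN
   -- TG_ARGV is a text[] holding the arguments given in CREATE TRIGGER
   RAISE NOTICE 'table: %, first argument: %', TG_TABLE_NAME, TG_ARGV[0];
   RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_log
BEFORE UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE log_change('audit');  -- 'audit' becomes TG_ARGV[0]
```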

You are trying to use COUNT(sale_order_line.name) as a GROUP BY item. Aggregate functions operate on grouped items; they are not for grouping them. I do not know your tables, but try: Select stock_inventory_line.product_code AS Sku, COUNT(sale_order_line.name) AS Qty_Sold, stock_inventory_line.product_qty AS Current_Qty, (stock_inventory_line.product_qty / COUNT(sale_order_line.name)) AS NOM From sale_order_line, product_product, product_template,...

Npgsql is simply a driver for interacting with the database; HTML formatting of results is completely out of its scope (unlike psql, which is a user-facing console app). Note that there wouldn't be any "universal" way to format the HTML that would satisfy everyone. However, it...

I interpret your remark "but that result can't be negative" as a requirement to return 0 instead of negative results. The simple solution is GREATEST(): SELECT GREATEST(sum(amount) - (SELECT sum(amount) FROM solicitude WHERE status_id = 1 AND user_id = 1), 0) AS total FROM contribution WHERE user_id = 1; Otherwise,...

You can just repeat what you wrote: select Greatest(p.price,0) as newprice, sum(q.qty) as qty, Greatest(p.price,0) * sum(q.qty) as total or you can wrap the select in a derived table (PostgreSQL: using a calculated column in the same query): select tmp.newprice, tmp.qty, tmp.newprice * tmp.qty from ( select Greatest(p.price,0) as newprice, sum(q.qty) as qty,...

The postgres:9.4 image you've inherited from declares a volume at /var/lib/postgresql/data. This essentially means you can't copy any files to that path in your image; the changes will be discarded. You have a few choices: You could just add your own configuration file as a volume at run-time with...

If all you need is a count of the number of rows where player_id is 1, then you can do this: SELECT count(*) FROM your_table_name WHERE player_id = 1; If you want to count the number of rows for each player_id, then you will need to use a GROUP BY:...
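A sketch of the grouped version (the answer's own code is truncated above; the table name is taken from the answer):

```sql
-- Number of rows for each player_id:
SELECT player_id, count(*) AS ct
FROM   your_table_name
GROUP  BY player_id;
```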

CROSS JOIN the latest date to generate_series(), followed by a LEFT JOIN to the main table. SELECT ARRAY( SELECT count(d.invoicedate) AS ct FROM ( SELECT max(invoicedate) AS last_date FROM dok WHERE invoicedate < current_date -- "maximum date before current date" ) l CROSS JOIN generate_series(0, 11*7, 7) AS g(days) LEFT...

My advice is not to build your SQL statements through string concatenation as you are doing. I would use the PreparedStatement class, which provides methods to deal with many different types, such as arrays, and brings additional benefits such as protection from SQL injection by malicious users. For instance,...

Query The query is not as simple as it looks at first. The shortest query string does not necessarily yield best performance. This should be as fast as it gets, being as short as possible for that: SELECT p.username, COALESCE(w.ct, 0) AS won, COALESCE(l.ct, 0) AS lost FROM ( SELECT...

You can use extract(days from (action_at - registered_at) / 7)+1 to get the number of weeks. Then count the number of actions grouped by the number of weeks. select user_id, wk, count(*) actions from (select user_id, extract(days from (action_at - registered_at) / 7)+1 wk from Table1) a where wk <=...

As @tadman says, you should never use string concatenation to compose your query - that is a source of SQL injection. However, there's no need to prepare your statement. Use parameter placeholders in your query; something like the following should work: string UpdateCmd = "update dx set chronic = @p1...

Since fleets is an array column you have a couple of options. Either use the ANY construct directly (no need to unnest()): SELECT * FROM vehicles WHERE fleet_fk = ANY(SELECT fleets FROM auth_user WHERE id = 4); Or rewrite as join: SELECT v.* FROM auth_user a JOIN vehicles v ON...

You shouldn't build SQL by inserting your variables directly via string concatenation. What happens here is that with 11, your SQL becomes: set last=11 which is valid SQL (using 11 as an integer literal), while with xx it becomes: set last=xx There are no quotes, so the SQL means you're...

PostgreSQL comes preinstalled on Cloud9. You simply can't run bundle exec rake db:migrate though, because you have to set it up and connect to it first. Refer to the documentation here https://docs.c9.io/v1.0/docs/setting-up-postgresql on how to set it up. Also, you don't need to install PostgreSQL to be able to deploy...

If your case is as simple as the example values suggest, @Giorgos' answer serves nicely. However, that's typically not the case. If the id column is a serial, you cannot rely on the assumption that a row with an earlier time also has a smaller id. Also, time values (or...

The problem is the migration from Process.milestone as a boolean field to Process.milestone as a foreign key. Postgres doesn't wait for a migration to fail on uncastable data. It wants a rule to alter the table in advance. If you don't intend any sort of data migration between two fields,...

Use CASE expression in ORDER BY clause: SELECT category FROM ( SELECT DISTINCT category FROM merchant ) t ORDER BY CASE WHEN category = 'General' THEN 0 ELSE 1 END, category ASC CASE guarantees that rows with General will be sorted first. The second argument orders the rest of the...

There is no built-in facility to do what you want, at least in PostgreSQL. Doing it effectively would require significant changes to how data is stored, as currently each row is independent of all other rows (well, except TOAST pointers for out-of-line stored data that is unchanged in an UPDATE). A...

I solved the problem by changing my reduce function so that, if there was not the correct number of fields, it would output a certain value. I was then able to use --input-null-non-string with that value, and it worked.

This is about right. Please do not use port 5432 for PostgreSQL. Pick a non-default port well outside that range, like 5599 or something. Change the port by modifying postgresql.conf and setting the port directive. You might find it useful to simply append include_dir = conf.d or similar in your...
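A sketch of the relevant postgresql.conf fragment (the port value is just an example; the include_dir line follows the suggestion in the answer):

```
# postgresql.conf
port = 5599                 # a non-default port

# optionally, appended at the end of postgresql.conf:
include_dir = 'conf.d'      # read additional *.conf files from this directory
```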

According to my internet search, it is not possible to perform both insert and update directly to a PostgreSQL DB with Sqoop. Instead, you can create a stored proc/function in PostgreSQL and send the data to it: sqoop export --connect <url> --call <upsert proc> --export-dir /results/bar_data The stored proc/function should perform both the Update and the Insert....

You can create a temporary table (so that it will visible only in the session). In the row level trigger you insert the rows into the temporary table, in the statement level trigger you select (and delete) from the temporary table.
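A compact sketch of the pattern, with assumed names:

```sql
-- Once per session (or created from the triggers themselves):
CREATE TEMP TABLE changed_ids (id int);

-- In the row-level trigger function:
--   INSERT INTO changed_ids VALUES (NEW.id);

-- In the statement-level AFTER trigger function:
--   ... process the rows: SELECT * FROM changed_ids ...
--   DELETE FROM changed_ids;  -- empty it for the next statement
```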

Answer for timestamp You need to understand the nature of the data types timestamp without time zone and timestamp with time zone (names can be deceiving). If you don't, read this first: Ignoring timezones altogether in Rails and PostgreSQL The AT TIME ZONE construct transforms your timestamp to timestamptz, which...
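The type-flipping behavior of AT TIME ZONE can be seen directly (example values are mine):

```sql
-- timestamp -> timestamptz: interpret the naive value in the given zone
SELECT timestamp '2015-07-01 12:00' AT TIME ZONE 'UTC';

-- timestamptz -> timestamp: render the absolute point in time in the given zone
SELECT timestamptz '2015-07-01 12:00+00' AT TIME ZONE 'UTC';
```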

I don't think update_attribute is going to be useful, as it will replace the array with the new value rather than append to it (but see the better explanation below in the --Update-- section). I'm not sure what the best practices are here, but this should work to just add something if it...

If you are sure that there are never more than 10 elements, you can simply cast your hex string to text: INSERT INTO my_table (hex_text) VALUES (<some hex data>::text); Or use a bytea column instead?...
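A sketch of the bytea alternative mentioned at the end (hypothetical table):

```sql
CREATE TABLE my_table (hex_data bytea);

-- store the raw bytes:
INSERT INTO my_table (hex_data) VALUES (decode('deadbeef', 'hex'));

-- and get the hex string back:
SELECT encode(hex_data, 'hex') FROM my_table;
```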

CURRENT_TIME is a reserved word (and a special function); you cannot use it as a variable name. You don't need a variable here to begin with: CREATE OR REPLACE FUNCTION test_func(OUT pid bigint) AS $func$ BEGIN INSERT INTO "TEST"(created) VALUES (now()) RETURNING id INTO pid; END $func$ LANGUAGE plpgsql; now() is...

Just cast it to a date: select sum(use), localminute::date from tablename group by localminute::date; or using standard SQL: select sum(use), cast(localminute as date) from tablename group by cast(localminute as date); ...

So the problem was that em.createNativeQuery(...) was not the correct way to invoke a NamedNativeQuery; to do that, I should have invoked em.createNamedQuery(...). However, since em.createNativeQuery(...) does not accept a @SqlResultSetMapping, it is very difficult to map the result to a custom class. The end solution was to use return...

Use pg_drop_replication_slot: select pg_drop_replication_slot('bottledwater'); See the docs and this blog. The replication slot must be inactive, i.e. no active connections. So if there's a streaming replica using the slot you must stop the streaming replica. Or you can change its recovery.conf so it doesn't use a slot anymore and restart...

The most efficient way of doing this would be something like this: first, create a staging table with the same structure as the target table but without the unique constraint; then, batch-insert all rows into that staging table. The most efficient way is to use copy or use the CopyManager (although...

SELECT itemid, deadlineneeded, sum(quantity) AS total_quantity FROM <your table> WHERE (deadlineneeded - delievrydate)::int >= 1 GROUP BY 1, 2 ORDER BY 1, 2; This uses a "delievrydate" (looks like a typo to me) that is at least 1 day before the "deadlineneeded" date, as your sample data is suggesting....

The query to prepare should be insert into test values(?,lo_import(?)); Your proposal insert into test values(?,?) can't work because you can't submit an SQL function call (lo_import) as the value for a placeholder (?). Placeholders only fit where a literal value would fit. When you're asking what's the prepared statement...

It turned out (please see comments) that this query: SELECT MIN(minimal) AS minimal FROM ( SELECT MIN("products"."shipping") AS minimal FROM "products" WHERE "products"."tag_id" IN (?,?,?,?,?,?,?) GROUP BY "tag_id" ) some_alias is able to deceive PostgreSQL in such a way that it performs better because, as I guess, it uses the...

The query below fills in the empty values in the result set of your original query. The method consists of splitting the data into partitions according to the number of empty values and selecting the first (non-empty) value from each partition (add * to the select to see how it works). WITH survey...

You can DRY up the duplication of the projection with a CTE, and then use this in your WHERE predicate: WITH myCte AS ( select order_id , order_item_id , sku ,merchant_payable, order_created_at , case when name like 'Rise%' then amount-(((amount*12.14)/100)+ ((amount*3.08)/100) + 51.30) when name like 'Masha%' then amount-(((amount*9.10)/100)+ ((amount*3.08)/100)...

This is a bit tricky, because it is more than a "simple join". There are several ways to approach this. One method uses aggregation and a having clause: select op.order_id from order_products op group by op.order_id having sum(case when op.status = 'shipped' then 1 else 0 end) = count(*); That...

You can choose between: select aschema.afunction() as my_name; -- like in IMSoP's answer select my_name from aschema.afunction() as my_name; -- with alias select afunction from aschema.afunction(); -- with function name If you add aschema to search path, you can omit schema identifier: set search_path to public, aschema; select afunction() as...

As @Craig explained, you can't (and even if you could, it would be fairly pointless). The usual way of implementing restricted superuser permissions is to connect as an existing superuser role, and create SECURITY DEFINER functions containing a limited set of approved commands. These functions will now be executed with...

Do you have PostgreSQL installed locally? If not, that might be the reason (having gem 'pg' is not enough to install it locally). You will have to run sudo apt-get install postgresql postgresql-contrib to install it. You do not need it installed locally to push to Heroku, though...as long...

You can do this much more efficiently with a single SQL statement with data-modifying CTEs. No function required (but possible, of course), no looping, no exception handling: WITH plan AS ( SELECT recid, cdesc, min(recid) OVER (PARTITION BY cdesc) AS master_recid FROM cpt ) , upd_lab AS ( UPDATE lab...