Tag Info

That's because of Instant File Initialization. In short, SQL Server can take advantage of this privilege for database data files (not transaction log files). What this means is that SQL Server does not have to zero out the data file(s) when initializing.
Without the "Perform volume maintenance tasks" privilege granted to the SQL Service account, upon the ...
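On recent versions you can check whether Instant File Initialization is in effect directly from the server (the `instant_file_initialization_enabled` column was added in SQL Server 2016 SP1; on older builds this sketch won't work):

```sql
-- Shows, per SQL Server service, whether IFI is enabled for the service account
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```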

In short: No.
There will be a very, very small difference in parsing time for statements that specifically mention the index, or when generating output that mentions it, but this is so vanishingly small compared to all the other work the database engine is doing that it is simply noise - far too small to even reliably measure.
When the index name is ...

I had another look at this and can reproduce your issue. Try adding OPTION ( MAXDOP 1 ) to your query. In my test rig with a 300MB file this ran in 1 min 42 seconds. The unhinted version ran for 30 minutes at 100% CPU before I killed it.
You could also have a look at OPENXML. People often say it's faster with large XML files and it appears to be in this ...
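For illustration, the hint goes at the end of the statement; the table and XPath below are hypothetical stand-ins for the asker's query:

```sql
-- Force a serial plan for the XML shredding, as suggested above
SELECT x.n.value('(Id)[1]', 'int') AS Id
FROM dbo.ImportedXml AS t
CROSS APPLY t.XmlData.nodes('/Root/Row') AS x(n)
OPTION ( MAXDOP 1 );
```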

Your assumption about adjacency is correct.
If we use TPC-H as an example: clustering the LINEITEMS table on ORDERID will locate all order lines belonging to the same order physically adjacent on disk. This speeds up queries that fetch all order lines for a given ORDERID. Clustering on the foreign key to the parent also allows fast merge joins between ...
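A sketch of the idea in SQL Server syntax (the names follow the example above, not the actual TPC-H column names):

```sql
-- Cluster the child table on the foreign key to the parent
CREATE CLUSTERED INDEX CIX_LINEITEMS_ORDERID
    ON LINEITEMS (ORDERID);

-- All rows for one ORDERID are now physically adjacent,
-- so this becomes a short range seek rather than scattered lookups:
SELECT * FROM LINEITEMS WHERE ORDERID = 42;
```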

I echo the "bad form" comment of @JohnM - design the thing properly, and if you have new requirements (or your design isn't perfect first time - unlikely I know :-) ), then choose to add new fields. Use JSON if it suits your clearly demonstrated requirements, otherwise stick with "normal" field types.
I've seen too many systems where these "spare fields" ...

Why have an id at all? Why not have PRIMARY KEY (user_id, post_id)?
Why have user_id and post_id nullable? Shouldn't they be NOT NULL?
@jynus is right about a covering index, but if you change the PK as I suggest, that separate index won't be necessary.
innodb_buffer_pool_size should normally be 70% of available RAM.
I don't see how (pre)caching would ...
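A sketch of the suggested change, assuming a typical user/post junction table (MySQL/InnoDB syntax; column types are guesses):

```sql
CREATE TABLE user_post (
    user_id INT NOT NULL,             -- NOT NULL, as argued above
    post_id INT NOT NULL,
    PRIMARY KEY (user_id, post_id)    -- replaces the surrogate id
) ENGINE=InnoDB;
```

With InnoDB the table is clustered on this composite PK, which is why the separate covering index becomes unnecessary.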

NOT IN is typically the slowest option. LEFT JOIN / IS NULL is more promising. Or NOT EXISTS:
Help with this SELECT in the same table
Query
Just your query, formatted:
SELECT u.id_user, u.username
FROM "user" u
LEFT JOIN friend_request fr ON fr.sent_to = u.id_user
AND fr.sent_from = 288
LEFT JOIN friends f ON ...
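A NOT EXISTS formulation might look like this (a sketch only; the friends join is cut off above, so just the friend_request part is reproduced):

```sql
SELECT u.id_user, u.username
FROM "user" u
WHERE NOT EXISTS (
    SELECT 1
    FROM   friend_request fr
    WHERE  fr.sent_to   = u.id_user
    AND    fr.sent_from = 288
);
```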

When considering the impact of a new index, the important things to consider are how often rows are added to the table, whether the indexed field gets updated (and how often), and how many distinct values there are in that field (selectivity).
For a table this size, it's probably not going to improve the performance that much, but if you're getting deadlocks it might help.
If ...

Like @dezso commented, creating a new table and dropping the old one used to be faster in old versions, but that is no longer the case with the new implementation in pg 9.1.
The most common problem with CLUSTER is that it requires an exclusive lock on the table, which does not go well with concurrent access to it.
The solution to this problem is pg_repack, which does not ...

The post you found is from 2007. Rather start with the current manual:
When the PREPARE statement is executed, the specified statement is
parsed, analyzed, and rewritten. When an EXECUTE command is
subsequently issued, the prepared statement is planned and executed.
This division of labor avoids repetitive parse analysis work, while
allowing the ...
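A minimal illustration of that division of labor (PostgreSQL syntax; the table and parameter are hypothetical):

```sql
PREPARE get_user (int) AS            -- parsed, analyzed, rewritten once
    SELECT * FROM users WHERE id = $1;

EXECUTE get_user(42);                -- planned and executed per call

DEALLOCATE get_user;                 -- release the prepared statement
```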

SHORT ANSWER
Only as a last resort
LONG ANSWER
Having multiple indexes can be a rather arduous adventure for MySQL Query Optimizer.
I have written about this before
Sep 18, 2012 : How are multiple indexes used in a query by MySQL?
Apr 19, 2014 : Optimizing indexes (Under the Heading ANSWER TO QUESTION #2)
In essence, MySQL will do lookups along ...

If you are testing a and b, INDEX(a, b) is likely to be better.
Indexing a flag (by itself) is almost never useful.
Please provide SHOW CREATE TABLE and a few WHERE clauses; I will give specific advice.
Here's a quick cookbook for building an INDEX that will often be optimal. Given a WHERE with a bunch of expressions connected by AND:
List all the ...
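Applying the cookbook to a simple case (illustrative table and column names, not from the question):

```sql
-- For: WHERE a = 1 AND b = 2
-- put the '='-tested columns first in the composite index:
ALTER TABLE t ADD INDEX idx_a_b (a, b);

SELECT * FROM t WHERE a = 1 AND b = 2;   -- can use idx_a_b for both columns
```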

From what I understand, I will try to sketch a simple database schema. I will put the properties that can be customized by the store administrator into different tables from the "immutable" properties.
product table contains "immutable" property attributes:
+----+--------------+
| id | UPC |
+----+--------------+
| 1 | product ABC |
| 2 | ...

You don't want to be in the business of dynamically creating a separate table for each store's admin to store their own data in. Instead, use a single table and designate a column to differentiate the admin's customization, organized by store.
You haven't given enough details to get into the minutiae of the implementation, but this approach can ...
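One possible shape for that single table (all names here are hypothetical, since the question doesn't give specifics):

```sql
CREATE TABLE product_customization (
    store_id   INT          NOT NULL,  -- differentiates each store's admin
    product_id INT          NOT NULL,
    attr_name  VARCHAR(64)  NOT NULL,
    attr_value VARCHAR(255) NULL,
    PRIMARY KEY (store_id, product_id, attr_name)
);
```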

There are other DMVs that can help you quantify this. sys.dm_exec_sessions has columns cpu_time, memory_usage and total_scheduled_time amongst others. While you're running your plan-getting query in one session you can interrogate this DMV in a second session to find how expensive the first is.
Pleasingly, you can also quantify how expensive your second ...

Install the extension pg_stat_statements with the SQL command
CREATE EXTENSION pg_stat_statements;
You may want to make sure you create this by using an appropriate user (such as the user your application uses or some dba account). Be aware that whichever user creates the extension will also own it.
This will require a server restart for it to be usable ...
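Once the extension is active, the view can be queried like any other table, e.g. to find the most expensive statements (note the timing column is named `total_exec_time` from PostgreSQL 13 on; in earlier versions it is `total_time`):

```sql
SELECT query, calls, total_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```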