So, I am trying to teach myself partitioning in PostgreSQL. I understand that a database can become slow when a table hits millions of rows and its indexes no longer fit into memory.
One thing that ...
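For anyone else starting out: a minimal sketch of declarative range partitioning (available since PostgreSQL 10). The table and column names here are made up for illustration, not taken from the question.

```sql
-- Parent table declares the partitioning scheme but holds no rows itself.
CREATE TABLE measurements (
    id        bigserial,
    logged_at timestamptz NOT NULL,
    value     numeric
) PARTITION BY RANGE (logged_at);

-- Each partition covers a half-open range [FROM, TO).
CREATE TABLE measurements_2024 PARTITION OF measurements
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

CREATE TABLE measurements_2025 PARTITION OF measurements
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');

-- Queries that filter on logged_at can skip irrelevant partitions
-- ("partition pruning"), and each partition's indexes stay smaller.
SELECT count(*) FROM measurements
WHERE logged_at >= '2025-03-01' AND logged_at < '2025-04-01';
```

The pruning benefit only applies when queries actually filter on the partition key, so that column choice is the main design decision.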

I am developing an app that sends messages (email and SMS) to users. The DB must also maintain the list of messages sent to users, along with the status of each message (sent or failed).
I ...
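One way this is commonly modeled is a single `messages` table with a channel and a status column; the names and the exact status values below are my assumptions, not from the question.

```sql
CREATE TABLE users (
    id    bigserial PRIMARY KEY,
    email text,
    phone text
);

CREATE TABLE messages (
    id       bigserial PRIMARY KEY,
    user_id  bigint NOT NULL REFERENCES users (id),
    channel  text   NOT NULL CHECK (channel IN ('email', 'sms')),
    body     text   NOT NULL,
    status   text   NOT NULL DEFAULT 'pending'
             CHECK (status IN ('pending', 'sent', 'failed')),
    sent_at  timestamptz
);

-- "Which messages failed for this user?"
SELECT id, channel, sent_at
FROM messages
WHERE user_id = 42 AND status = 'failed';
```

An index on `(user_id, status)` would likely help that query once the table grows.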

Can someone help me translate the following query into relational algebra? I am studying DBMS and I am stuck on this exercise.
Query:
SELECT a.name, d.name, d.grade
FROM student a, enrolled i, class ...
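The general recipe: `FROM` with multiple tables becomes a cross product (or join), `WHERE` becomes a selection σ or a join condition, and `SELECT` becomes a projection π. Since the `FROM` clause above is cut off, the rendering below assumes `class` is aliased `d` and the usual join keys apply; those join conditions are guesses:

```latex
% Assuming student.sid = enrolled.sid and enrolled.cid = class.cid:
\pi_{a.\mathit{name},\; d.\mathit{name},\; d.\mathit{grade}}\Bigl(
    \rho_{a}(\mathit{student})
    \bowtie_{a.\mathit{sid} = i.\mathit{sid}} \rho_{i}(\mathit{enrolled})
    \bowtie_{i.\mathit{cid} = d.\mathit{cid}} \rho_{d}(\mathit{class})
\Bigr)
```

The rename operator ρ introduces the aliases `a`, `i`, `d`, mirroring the SQL aliases.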

I am currently working on a small travel application in which users can add other users' trips to their wishlists. I am having difficulty designing the database for the wishlist feature.
What I have tried so far is:
...
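Since the attempted schema is cut off, here is the common pattern for this kind of feature: a junction table linking users to the trips they wish for. All table and column names are illustrative assumptions.

```sql
CREATE TABLE wishlist (
    user_id  bigint NOT NULL REFERENCES users (id),
    trip_id  bigint NOT NULL REFERENCES trips (id),
    added_at timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (user_id, trip_id)  -- a trip appears at most once per wishlist
);

-- All trips on user 7's wishlist:
SELECT t.*
FROM wishlist w
JOIN trips t ON t.id = w.trip_id
WHERE w.user_id = 7;
```

The composite primary key both prevents duplicates and serves as the index for the lookup above.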

I just began optimizing my schemas with indexes (pretty late, I guess).
I've never dealt with a huge dataset, but I'll soon be coping with about 1 to 5 million rows.
Here is the schema I'm using right now ...
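Since the schema itself is cut off, a generic illustration of the usual starting point (the `orders` table and its columns are hypothetical): index the columns that your `WHERE`, `JOIN`, and `ORDER BY` clauses actually use, and verify with the planner.

```sql
-- Single-column index for lookups by customer:
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- Composite index: supports WHERE customer_id = ? AND created_at > ?,
-- and equality-on-customer with a sort on created_at.
CREATE INDEX idx_orders_customer_date ON orders (customer_id, created_at);

-- Check whether the planner actually uses it:
EXPLAIN ANALYZE
SELECT * FROM orders
WHERE customer_id = 123
ORDER BY created_at DESC;
```

At 1 to 5 million rows the difference between a sequential scan and an index scan is usually very visible in the `EXPLAIN ANALYZE` output.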

The main reason NoSQL DBMSs have become so common is the need to store unstructured data, where a relational database would instead require carefully designing a database schema and fitting data into ...

Virtual machine software like VirtualBox allows one to make incremental VM clones. That is, data, once "touched" (opened writable), will be copied and stored in the incremental cache of the new clone.
...

We have a high-traffic news website, and I want to add a feature so that every user can search through all the content of the site, such as news, polls, comments, galleries, etc. Each content type has its ...
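One common approach when each content type lives in its own table is a denormalized search table, kept in sync by the application or by triggers. The names below are hypothetical, and the full-text syntax shown is MySQL's; other engines differ.

```sql
CREATE TABLE search_index (
    content_type varchar(20) NOT NULL,  -- 'news', 'poll', 'comment', ...
    content_id   bigint      NOT NULL,  -- id in the source table
    title        text,
    body         text,
    PRIMARY KEY (content_type, content_id),
    FULLTEXT KEY ft_search (title, body)
);

-- One query searches every content type at once:
SELECT content_type, content_id, title
FROM search_index
WHERE MATCH (title, body) AGAINST ('election' IN NATURAL LANGUAGE MODE);
```

The trade-off is duplicated data and the need to update `search_index` whenever a source row changes; the alternative of `UNION`-ing per-type queries avoids duplication but is harder to rank and paginate.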

I've got a large (many columns) MySQL database table (InnoDB) that has a fairly high number of INSERTs (~500/day). Think of the records as financial transactions. Clients need to be able to view these ...

I'm asking this because Postgres was hard-stopped yesterday, and I fear that there could be partial / corrupt data in one of my archived log segments. I'd like to simply delete the logs from my slave ...

I am dealing with an existing application that uses the database as a sort of transaction log in several cases, for example orders or payments. These tables are large (20 - 60 million rows) and poorly ...

I have a mature, 50+ table web application based on MySQL. In order to do some advanced data mining I want to use Neo4j and the goodness of Cypher. However, I'm having a hard time migrating my data from ...

This might be somewhat ambiguous, but I don't have much information about this, so I need a starting point.
As a general idea: in conventional RDBMSs, there are a DB engine and storage for the data. We use the language ...

My understanding of optimistic locking is that it uses a timestamp on each record in a table to determine the "version" of the record, so that when the record is accessed by multiple processes at the ...
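That understanding matches the usual pattern. A sketch of how it plays out in SQL, using an integer version column (a timestamp column works the same way); table and column names are illustrative:

```sql
-- 1. Read the row, remembering its version:
SELECT balance, version FROM accounts WHERE id = 1;
-- suppose this returns balance = 100, version = 7

-- 2. Write back only if nobody else updated the row in the meantime:
UPDATE accounts
SET balance = 90, version = version + 1
WHERE id = 1 AND version = 7;

-- If another process committed first, version is now 8, the WHERE
-- clause matches zero rows, and the application detects the conflict
-- via the affected-row count and retries from step 1.
```

No lock is held between the read and the write, which is what makes it "optimistic": conflicts are detected at write time rather than prevented up front.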