My table won't update

It's a complicated problem and I will try to be as brief as I can. I hope you'll be able to understand my description.

I've coded a script that works perfectly on my localhost. When I uploaded the files to a server, some things just did not work.

For the script I used two MySQL tables. The tables have the exact same structure and different names. They are 'big' tables: 46 fields each.

The fields of the tables are filled in by forms, section by section (let's say a section is 3-5 fields): one INSERT and then UPDATEs.

It seems that when most fields are already filled in, the next fields just won't update. There are no errors. When I try to fill in those fields by hand using phpMyAdmin, with the same SQL command I used in my code, the fields update just fine.

Is there something I am missing? Do you think it is a coding mistake, or is there an Apache setting I did not think about? It is important to mention that if I repeat the procedure, starting by filling the table from the section that wouldn't update, that section will update, and soon another section won't.

I know I may not have given a very good description. If you don't understand what I am saying, I will try again, including some code.

The tables magazine_tmp and magazine have the same structure. By the time this code executes, some fields have data in one table and some fields in the other - not necessarily the same ones.
When I copied the SQL statement that wouldn't execute and pasted it into phpMyAdmin and ran it there, it worked...

Yes, the error reporting helped. I feel stupid that I did not think of it myself.

The message:

Error - Could not perform the query: Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

I am not sure what I will do next, but it's a good start. I am not sure about ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED.
If you have any suggestions, I am listening...
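For what it's worth, the first option the error message suggests is a one-line change per oversized column. A sketch, assuming the content columns are currently large VARCHARs (the column name athro is taken from later in this thread):

```sql
-- Converting a long VARCHAR column to TEXT moves most of its data
-- off the row page, so it no longer counts (much) against the
-- ~8126-byte InnoDB row size limit. One ALTER per oversized column:
ALTER TABLE magazine     MODIFY athro TEXT;
ALTER TABLE magazine_tmp MODIFY athro TEXT;
```

Existing data is preserved by MODIFY, but backing up both tables before altering them is still the safe move.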

Anyone? How can I handle that big, loaded row? I googled it and found out about the Barracuda file format. Is that what I am looking for? Has anyone tried it? Can I enable it from code only, or will I have to change the server configuration (I cannot do that)? Any alternatives? I am stuck there and I am waiting for your experience to give me advice about it...
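A note on Barracuda: it is a server-side InnoDB file format, not something php.ini controls, so whether it is reachable depends on the host. On MySQL 5.5/5.6 the rough sequence would be the following sketch (it assumes SUPER privilege or my.cnf access; on 5.7+ Barracuda is already the default, and the setting was removed in 8.0):

```sql
-- Server-level, one time (MySQL 5.5/5.6; needs SUPER privilege):
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_file_per_table = ON;

-- Then each table can opt in to the DYNAMIC row format, which stores
-- long columns fully off-page instead of keeping a 768-byte prefix
-- inline, easing the row-size pressure:
ALTER TABLE magazine     ROW_FORMAT=DYNAMIC;
ALTER TABLE magazine_tmp ROW_FORMAT=DYNAMIC;
```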

Before wading in here: are you sure your tables are set up properly?
They seem big to me - especially as you've got field names like 'author7', suggesting that you have author1-6 too. If that is the case, then you need to consider redesigning your DB and looking up 'normalization'.

edition_articles

Yes, all your assumptions are right. I have a simple, small magazine as part of a school web page.
The students fill in 7 articles, with surrounding data, through forms into the table magazine_tmp.

The teacher-admin checks the articles and, if he approves, he pushes a button that moves all the data to the table magazine, which is the table the magazine gets its data from.

I designed it that way, I built it, and it seemed to work. In my last test I loaded the database with a lot of data and got these errors.

The design you suggest is interesting, but I was wondering if I could use something easier and less time-consuming. Can I keep these tables somehow? Maybe by compressing data (I don't know if that is possible), or by raising the amount of data a single row can hold? What about that Barracuda file format - is it applicable in my case?

You can certainly carry on with your schema, but it's not normalized and therefore prone to errors and duplication. Duplication will swell your tables to silly proportions.

AFAIK Barracuda is InnoDB-only; other than that I know nothing about it.

With normalized data you could do this, assuming one author per article:

articles

article_id [PK, int]
author_id [FK, int]
title [varchar, 150]
content [blob]

Then that article could be shared amongst many editions, say if you wanted to reprint it. It would also help if you need to search for articles. You now only have to search on one field (or two, if you include the title) instead of 20 (or however many you have - assuming 10 articles per row).
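Assuming InnoDB and a link table named after the edition_articles sketch above, that normalized design could be written roughly as:

```sql
-- A sketch of the suggested normalized schema. The edition_articles
-- link table is an assumption, added to show how one article could
-- be shared amongst many editions (e.g. a reprint).
CREATE TABLE articles (
    article_id INT AUTO_INCREMENT PRIMARY KEY,
    author_id  INT,                 -- FK to an authors table
    title      VARCHAR(150),
    content    BLOB
) ENGINE=InnoDB;

CREATE TABLE edition_articles (
    edition_id INT,
    article_id INT,
    PRIMARY KEY (edition_id, article_id)
) ENGINE=InnoDB;
```

Because the long content lives one row per article rather than many columns per row, the per-row size limit stops being an issue.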

Although you may be able to do it your way if you're able to compress the data, it's not recommended.

I am familiar with 'normalization'. With the way my schema works, there is no way to have conflicts or duplications. Every article (1, 2, ... 7) has its own form and web page. One article per month is allowed (it's a monthly). If an article already exists for a particular month, the system updates the changes (no extra insertions). I am not sure if I explained the schema and system right, but there is no way duplications can occur. It is a very simple magazine and I don't want to make it more complicated (I know that would be better, but it is not my pursuit right now).

The problem is that for every edition (every month) I store all the data needed in a single row. I did not know that there is a limit to the data I can store in a row. When I found out about that problem (with DaniWeb's help) I thought it would be easy to solve with a few lines of code. It seems that it cannot happen.

Now I am thinking of 'breaking' the magazine table into 7 tables, one for each article.

article1

month (varchar(3)), key
year (varchar(4)), key
title (varchar(150))
author (varchar(100))
image (varchar(3)) // indicates whether there is an image (yes or no)
checked (varchar(2)) // the teacher-administrator marks whether the article is OK to publish in the edition of the particular month and year
athro (text) // the content - should I use blob???

article2
the same...

I know those tables are not connected, but I think they don't need to be.
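In SQL, one of those per-article tables could look like the sketch below. The types and sizes come from the list above; the composite primary key is my reading of the two fields marked 'key', and InnoDB is an assumption:

```sql
-- One of the 7 per-article tables; article2..article7 would be
-- identical. (month, year) together identify an edition, so one
-- article per month is enforced by the key itself.
CREATE TABLE article1 (
    month   VARCHAR(3)  NOT NULL,   -- e.g. 'Jan'
    year    VARCHAR(4)  NOT NULL,   -- e.g. '2013'
    title   VARCHAR(150),
    author  VARCHAR(100),
    image   VARCHAR(3),             -- 'yes'/'no': is there an image?
    checked VARCHAR(2),             -- teacher's approval flag
    athro   TEXT,                   -- the content; TEXT suits text, BLOB is for binary data
    PRIMARY KEY (month, year)
) ENGINE=InnoDB;
```

With the long TEXT column alone in each row, the row-size limit should no longer be hit.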

I am going to wait, maybe for a day, for someone (diafol or anyone else) to comment on all this - or better, to propose a solution that keeps the schema with one table (storing all the data in one row) - and if nothing happens I will close this thread.

Dourvas - I'm sorry if my suggestions aren't too helpful. Breaking the row down into tables will certainly help with the size issue but, as you note, they are no longer connected. I can't see how to make further recommendations. Good luck, though, and I hope you get it sorted.