Forum Topic - etfs transactions during write : (5 Items)
etfs transactions during write  
Hi, 

if I produce a new file with e.g. "dd if=/dev/zero of=writefile &" and switch off the power,
I get the following after power on again:

fs-etfs-mpc5121: Truncating fid 0x2b3 from 958275584 to 0 bytes

It looks like the ETFS filesystem truncates the file to the size at which the file was
last closed. In the case of a new file, the last secure size seems to be 0.

If you create a logfile with write(), you can lose gigabytes of data if power fails.

Should I insert close() and open(), or some fsync(), from time to time?

This problem does not occur with fs-qnx6. Here you only lose some blocks in case of a power failure.

What am I missing here?

Kind regards
Michael
Re: etfs transactions during write  
Ping.

Re: etfs transactions during write  
Michael Tasche wrote:

>>if I produce a new file with e.g. "dd if=/dev/zero of=writefile &" and
>>switch off the power,
>>I get the following after power on again:
>>fs-etfs-mpc5121: Truncating fid 0x2b3 from 958275584 to 0 bytes
>>It looks like the ETFS filesystem truncates the file to the size at which
>>the file was last closed. In the case of a new file, the last secure size
>>seems to be 0.

Yes, in a conflict between the file size and replayed transactions, the
last size is used and excess data transactions are discarded.

To save NAND wear, the inode entry (in the .filetable) is by default not
updated in tandem with the file data.  The trigger points are the last
close(), an explicit fsync(), a global sync(), or the use of O_SYNC (but
beware the potential flash lifetime/wear cost of the latter approach).

>>If you create a logfile with write(), you can lose gigabytes of data if
>>power fails.

If nothing updated the file size, correct.  This is why, e.g., slogger
gained a '-c' option!

>>Should I insert close() and open(), or some fsync(), from time to time?

Yes, either, as above.  Arranging a periodic fsync() is probably most
efficient.  You could periodically "sync" as well (requiring no internal
code changes), but of course this will hit all mounted filesystems, so it
may not be appropriate.

>>This problem does not occur with fs-qnx6. Here you only lose some blocks
>>in case of a power failure.

A major difference is that fs-qnx6 will periodically commit itself
(the 'qnx6 snapshot=' period), in addition to allowing an explicit user
fsync() or sync().  So the file size will be exactly consistent with the
data written at that point in time.  Other disk filesystems always
schedule an inode update in conjunction with data writes, but use the
delayed write-behind ('blk delwri=') period, since, in contrast to flash,
the issue is one of performance rather than media wear; however, they
can't make the same strong guarantees that fs-qnx6 does with regard to
consistency between file size and data content (one of the inode or the
new data must hit the disk first).

ETFS ordering the data write before the inode write, combined with the
fid/transaction discard, means the data should be consistent up to the
advertised size, but it requires some application help to trigger the
inode write (in the absence of an internal periodic sync) to allow the
file size to advance.  In most cases said file eventually gets closed;
the logfile example is an obvious exception.

>>What am I missing here?
Re: etfs transactions during write  
Hello John, 

thank you very much for clarifying this for me.
I wanted to be sure that my flash driver is really OK.
Our customer is producing growing protocol files on a 1 GB NAND flash,
so he should use fsync() from time to time.

But our customer found another problem:
after editing an old text file on the target via an Eclipse/qconn
connection, the text file can end up with zero length
("fs-etfs-mpc5121: Truncating fid ... from ... to 0") if power is
switched off more than 5 minutes "after" the file has been saved via
Eclipse.

At the moment I hope this is a problem with the Eclipse/qconn/target filesystem access...

Kind Regards 
Michael


Re: etfs transactions during write  
Michael Tasche wrote:
> But our customer found another problem:
> after editing an old text file on the target via an Eclipse/qconn
> connection, the text file can end up with zero length
> ("fs-etfs-mpc5121: Truncating fid ... from ... to 0") if power is
> switched off more than 5 minutes "after" the file has been saved via
> Eclipse.

IIRC, this is a known/related issue; the IDE keeps the fd to the edited
file open, and so the last-close filetable update never gets to happen.

> At the moment I hope this is a problem with the eclipse/qconn/target-filesystem access...