Forum Topic - ETFS: optimal settings for ETFS_MEMTYPE_RAM questions

ETFS: optimal settings for ETFS_MEMTYPE_RAM questions  
I am trying to save critical application data to battery-backed SRAM devices.  I have modified the fs-etfs-ram MTD
driver to produce fs-etfs-sram* variants for both linearly mapped SRAM devices and SRAM that appears as a block device.
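
For context, a minimal sketch of what the linear-mapped postcluster path can look like, assuming the devio entry
points described in the QNX ETFS driver documentation; sram_base and CLUSTERSIZE are hypothetical stand-ins for the
mapped window and the cluster geometry, and transaction-header handling is omitted:

    /* Sketch only, not the actual driver source. */
    #include <errno.h>
    #include <stdint.h>
    #include <string.h>

    #define CLUSTERSIZE 512              /* matches -D clustersize=512 */

    struct etfs_devio;                   /* opaque here; the real types come */
    struct etfs_trans;                   /* from the ETFS driver framework   */

    extern uint8_t *sram_base;           /* e.g. from mmap_device_memory() */

    int devio_postcluster(struct etfs_devio *dev, unsigned cluster,
                          uint8_t *buf, struct etfs_trans *trp)
    {
        /* Battery-backed SRAM needs no program/erase cycle: a plain copy
         * commits the cluster. */
        memcpy(sram_base + (size_t)cluster * CLUSTERSIZE, buf, CLUSTERSIZE);
        return EOK;
    }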

PRIMARY QUESTION:

What are the ideal command-line settings for an fs-etfs-*ram* driver variant where the filesystem behaves as if it is
always running sync'ed?  Sync'ed behavior is a requirement.

SECONDARY QUESTION:

What is causing all the unnecessary activity in Cases 2 and 3 described below?

BACKGROUND:

The following command-line options have been tested:

-c0 -C0 -o0 -x0 -s0 -r0 -e -D size=1,clustersize=512,clusters2blk=32

These were chosen so that the filesystem would always be initialized and empty, and so that there would be no caching
(always sync'ed) before running the test case.  However, I am not getting the behavior I expect or desire.

The test cases are:

1)
 -open file  O_RDWR|O_TRUNC|O_CREAT, S_IRUSR|S_IWUSR|S_IXUSR
 -start write loop of 700KB, 1KB at a time

2)
 -open file  O_RDWR|O_TRUNC|O_CREAT, S_IRUSR|S_IWUSR|S_IXUSR|O_SYNC
 -start write loop of 700KB, 1KB at a time

3)
 -open file  O_RDWR|O_TRUNC|O_CREAT, S_IRUSR|S_IWUSR|S_IXUSR
 -start write loop of 700KB, 1KB at a time, fsync() after each 1KB write() (see the sketch after this list)
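
A minimal sketch of the test loop in C (Case 3 shown; the file path and fill pattern are arbitrary):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[1024];
        int  fd, i;

        memset(buf, 0xA5, sizeof(buf));
        fd = open("/fs/etfs/testfile", O_RDWR | O_TRUNC | O_CREAT,
                  S_IRUSR | S_IWUSR | S_IXUSR);
        if (fd == -1)
            return EXIT_FAILURE;

        for (i = 0; i < 700; i++) {          /* 700KB, 1KB at a time */
            if (write(fd, buf, sizeof(buf)) != sizeof(buf))
                return EXIT_FAILURE;
            fsync(fd);                       /* Case 3; Case 2 drops this and
                                                ORs O_SYNC into the open() flags */
        }
        close(fd);
        return EXIT_SUCCESS;
    }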

EXPECTATIONS:

In each case the file written to an empty, initialized fs does not approach the size limits, so I assume no background
processing (blkerase, defrag, etc.) needs to occur.

No caching!  Each libetfs io_write() entry would result in calls to devio_postcluster().

OBSERVATIONS:

ALL devio_*() have unique _NTO_TRACE_INSERTSUSEREVENT trace events.
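
(A sketch of how such instrumentation can be done: one simple user event per devio entry point.  The event numbers
below are arbitrary choices within the user-event range.)

    #include <sys/neutrino.h>
    #include <sys/trace.h>

    #define EV_POSTCLUSTER  (_NTO_TRACE_USERFIRST + 0)
    #define EV_READCLUSTER  (_NTO_TRACE_USERFIRST + 1)
    #define EV_ERASEBLK     (_NTO_TRACE_USERFIRST + 2)

    /* Called at the top of each devio_*() entry point. */
    static void trace_devio(int event, unsigned cluster)
    {
        TraceEvent(_NTO_TRACE_INSERTSUSEREVENT, event, cluster, 0);
    }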

The stock QNX fs-etfs-ram driver behaves exactly the same way as my fs-etfs-sram* drivers, that is:

Case 1:  

-no postcluster after a write (something is being cached)
-postcluster is the only devio used (no erase, no readcluster)

Case 2:  

-no postcluster after a write (something is being cached)
-postcluster only after sync
-as expected after each write, sync is called (O_SYNC on file open)
-after some number of writes, a BUNCH of erase, readcluster, and postcluster calls occur.  Huh?  The fs is less than
half full at this point

Case 3:

Same results as Case 2, except that the sync occurs due to the explicit fsync() call.

Re: ETFS: optimal settings for ETFS_MEMTYPE_RAM questions  
Case #2 above should read:

 -open file  O_RDWR|O_TRUNC|O_CREAT|O_SYNC, S_IRUSR|S_IWUSR|S_IXUSR
 
Re: ETFS: optimal settings for ETFS_MEMTYPE_RAM questions  
> I am trying to save critical application data to battery-backed SRAM
> devices.  I have modified the fs-etfs-ram MTD driver to produce fs-etfs-sram*
> variants for both linearly mapped SRAM devices and SRAM that appears as a
> block device.
> 
> PRIMARY QUESTION:
> 
> What are the ideal command-line settings for an fs-etfs-*ram* driver variant
> where the filesystem behaves as if it is always running sync'ed?  Sync'ed
> behavior is a requirement.
> 
> SECONDARY QUESTION:
> 
> What is causing all the unnecessary activity in Cases 2 and 3 described
> below?
> 
> BACKGROUND:
> 
> The following command-line options have been tested:
> 
> -c0 -C0 -o0 -x0 -s0 -r0 -e -D size=1,clustersize=512,clusters2blk=32
> 
> These were chosen so that the filesystem would always be initialized and
> empty, and so that there would be no caching (always sync'ed) before running
> the test case.  However, I am not getting the behavior I expect or desire.

ETFS requires cache blocks to hold the data from the client before writing.  That's why the cache size will not drop
to zero.

Synchronous writing is achieved as you've done, by using fsync() or O_SYNC on the open.

> 
> The test cases are:
> 
>  1)
>  -open file  O_RDWR|O_TRUNC|O_CREAT, S_IRUSR|S_IWUSR|S_IXUSR
>  -start write loop of 700KB, 1KB at a time
> 
> 2)
>  -open file  O_RDWR|O_TRUNC|O_CREAT, S_IRUSR|S_IWUSR|S_IXUSR|O_SYNC
>  -start write loop of 700KB, 1KB at a time
> 
>  3)
>  -open file  O_RDWR|O_TRUNC|O_CREAT, S_IRUSR|S_IWUSR|S_IXUSR
>  -start write loop of 700KB, 1KB at a time, fsync() after each 1KB write()
> 
> EXPECTATIONS:
> 
> In each case the file written to an empty, initialized fs does not approach
> the size limits, so I assume no background processing (blkerase, defrag,
> etc.) needs to occur.
> 
> No caching!  Each libetfs io_write() entry would result in calls to 
> devio_postcluster().

Caching is a requirement.  However, specifying O_SYNC will guarantee that the data is flushed to the device before the
client write() call returns.

> 
> OBSERVATIONS:
> 
> ALL devio_*() have unique _NTO_TRACE_INSERTSUSEREVENT trace events.
> 
> The stock QNX fs-etfs-ram driver behaves exactly the same way as my
> fs-etfs-sram* drivers, that is:
> 
> Case 1:  
> 
> -no postcluster after a write (something is being cached)
> -postcluster is the only devio used (no erase, no readcluster)

The data is being cached, and flushed once the cache buffers are full.  Metadata is only being updated when the file is 
synced.  

> 
> Case 2:  
> 
> -no postcluster after a write (something is being cached)
> -postcluster only after sync
> -as expected after each write, sync is called (O_SYNC on file open)
> -after some number of writes, a BUNCH of erase, readcluster, and postcluster
> calls occur.  Huh?  The fs is less than half full at this point

The file is growing and the timestamps are being updated.  Two things are written on each write(): the new 1KB of file
data, and the fileentry in the .filetable.  O_SYNC forces the writes to be flushed on each write() call, so there will
be 700 updates to the fileentry structure.  These updates create stale transactions, which result in partially dirty
blocks.  Eventually no blocks are clean anymore, and a reclaim operation is triggered.  This is the
readcluster/eraseblk activity that you are seeing.

You don't specify how large the partition is, but with a 512-byte clustersize you need three clusters for every
write(), giving 2100 clusters in total.  At clusters2blk=32, the filesystem starts dirtying a new block approximately
every 11 writes.
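
Spelled out, with the geometry assumed from the -D options above:

    per write():  1KB of data = 2 clusters, plus 1 cluster for the
                  fileentry update = 3 clusters
    total:        700 writes x 3 clusters = 2100 clusters
    per block:    32 clusters2blk / 3 clusters per write ~ 10.7, so a
                  new block starts being dirtied about every 11 writes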


> 
> Case 3:
> 
> Same results as Case 2, except that the sync occurs due to the explicit fsync() call.

Same as above.
