The mutex on the memory object protects:
When both have to be locked (the usual case), the address space must be locked before the object. This has implications when we want to do something to all the references of an object (e.g. vmm_resize). In those cases we'll already have the object locked, so we can't start locking the aspaces without creating deadlock scenarios. Instead, the memref_walk function sets the inuse field of each struct mm_map as it traverses it. When a struct mm_map is freed, it gets put on a free list without disturbing any data except the next pointer. Allocations from the list check the inuse field and skip over any entry that has a non-zero value.
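The free-list scheme above can be sketched roughly as follows. The next and inuse fields come from the text; the list head and the map_alloc/map_free helpers are hypothetical names used only for illustration, and locking around the free list itself is omitted.

    /* Minimal sketch, assuming these field and helper names. */
    struct mm_map {
        struct mm_map *next;   /* also chains the entry on the free list */
        unsigned       inuse;  /* set by memref_walk while it references the entry */
        /* ... other mapping fields, left untouched on free ... */
    };

    static struct mm_map *map_free_list;

    /* Freeing only links the entry onto the free list; nothing but the
     * next pointer is disturbed, so a walker can still examine it. */
    static void map_free(struct mm_map *mm) {
        mm->next = map_free_list;
        map_free_list = mm;
    }

    /* Allocation skips any entry a walker has marked inuse. */
    static struct mm_map *map_alloc(void) {
        struct mm_map **pp = &map_free_list;
        while (*pp != NULL) {
            if ((*pp)->inuse == 0) {
                struct mm_map *mm = *pp;
                *pp = mm->next;        /* unlink from the free list */
                return mm;
            }
            pp = &(*pp)->next;         /* busy entry: leave it and move on */
        }
        return NULL;                   /* caller falls back to a fresh allocation */
    }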
For non-MAP_LAZY mappings, a locked region (via mlock or mlockall) is made readable immediately (if PROT_READ), but may not be made writable immediately (even with PROT_WRITE) so that modifications can be tracked. A super-locked address space is always fully mapped immediately.
For MAP_LAZY mappings, memory is not allocated or mapped until first reference, regardless of the locking states above. Once a MAP_LAZY area has been referenced, it obeys the above rules; this means it is a programmer error to touch a MAP_LAZY area that hasn't already been referenced from a critical region (interrupts disabled or an ISR).
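A usage sketch of that rule from the application side: the region is touched and then locked up front, so later references from a critical region never land on an unreferenced lazy page. The buffer size and helper name are illustrative only.

    /* Sketch only: prepare a MAP_LAZY buffer before it is used in a
     * critical region. */
    #include <sys/mman.h>
    #include <string.h>

    #define BUF_SIZE 4096

    static void *setup_buffer(void) {
        void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANON | MAP_LAZY, -1, 0);
        if (buf == MAP_FAILED)
            return NULL;

        /* Touch the pages now so the lazy mapping is populated... */
        memset(buf, 0, BUF_SIZE);

        /* ...and lock them so they stay resident; only after this is it
         * safe to reference the buffer with interrupts disabled. */
        if (mlock(buf, BUF_SIZE) == -1) {
            munmap(buf, BUF_SIZE);
            return NULL;
        }
        return buf;
    }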
The other mmap flags don't interact with the locking state.
The default state of memory before any locking function has been performed is currently left indeterminate until we can do performance tuning. The default may end up differing depending on the underlying object (e.g. a shared memory object vs. a file).