Both of these ideas are feasible, and it should make no difference which one is actually employed. Apparently compliant decoders should show a maximum of 1 bit of error in each 16-bit output sample, so even if the profile were generated on a PC with a different decoder from the empeg's, the results should be the same.
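
To illustrate what that tolerance means in practice, here is a minimal C sketch that checks whether two decoders' PCM output for the same file agrees to within 1 LSB per 16-bit sample. The function name and the sample data are hypothetical; only the +/-1 LSB figure comes from the claim above.

    #include <stdio.h>
    #include <stdlib.h>

    /* Returns 1 if every pair of 16-bit samples differs by at most one
     * LSB, i.e. the two decoders would yield the same profile. */
    static int pcm_within_tolerance(const short *a, const short *b, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++) {
            int diff = (int)a[i] - (int)b[i];
            if (diff < -1 || diff > 1)
                return 0;
        }
        return 1;
    }

    int main(void)
    {
        /* Hypothetical buffers: one decoded on the PC, one on the empeg. */
        short pc[]    = { 100, -200, 32767, -32768 };
        short empeg[] = { 101, -200, 32766, -32768 };

        if (pcm_within_tolerance(pc, empeg, 4))
            printf("Decoders agree to within 1 LSB per sample\n");
        else
            printf("Decoders differ by more than 1 LSB\n");
        return 0;
    }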

If we had time I would recommend implementing both of these ideas, as this would give maximum flexibility. However, I fear that both are fairly large tasks and are unlikely to get done in the near future.

The power-off problem, I think, could be fixed fairly easily, and if the implementation turns out to be straightforward I will try to get it into a future release.