During my PhD I investigated how virtual machine introspection enables a secure virtual machine to access the state of a monitored machine and check both kernel integrity and the process self. The process self is computed statically by analyzing the source code and is defined through a context-free grammar, which describes the system call traces the process may issue during its execution, together with a set of invariants, each associated with a program point where the process invokes a call. A further example I worked on, during my partnership with IBM Zurich, is the design and implementation of a mechanism to transparently inject and protect a context-agent in a running virtual machine using introspection. This enables transparent retrieval of reliable high-level information about the internal operation of the monitored virtual machine, with confidence that the in-guest agent has not been compromised. You can find more info here.
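To give a flavour of the grammar-based part of the check, here is a minimal sketch (not the actual PhD implementation) of testing whether an observed system call trace is derivable from a context-free grammar. The grammar below is a hypothetical toy one, given in Chomsky normal form so that membership can be decided with the standard CYK algorithm; it accepts traces of the form `open read+ close`:

```python
# CYK membership check: does a syscall trace belong to the CFG?
# Toy grammar (hypothetical, in Chomsky normal form):
#   S -> O X          (a trace is an open followed by reads and a close)
#   X -> R C | R X    (one or more reads, terminated by a close)
#   O -> 'open', R -> 'read', C -> 'close'
TERMINAL_RULES = {"O": {"open"}, "R": {"read"}, "C": {"close"}}
BINARY_RULES = {"S": {("O", "X")}, "X": {("R", "C"), ("R", "X")}}
START = "S"

def trace_in_grammar(trace):
    """Return True iff `trace` (a list of syscall names) is in the language."""
    n = len(trace)
    if n == 0:
        return False
    # table[i][j] = set of nonterminals deriving trace[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, sym in enumerate(trace):
        for nt, terms in TERMINAL_RULES.items():
            if sym in terms:
                table[i][0].add(nt)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for nt, rhss in BINARY_RULES.items():
                    for b, c in rhss:
                        if b in left and c in right:
                            table[i][length - 1].add(nt)
    return START in table[0][n - 1]
```

In the actual approach, a monitor would run such a check incrementally as calls are intercepted, and additionally evaluate the invariants attached to each program point; here only the trace-language side is sketched.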
At Imperial College London, after completing a survey of more than 200 papers on virtualization security, I've observed that many publications rely on implicit, and often differing, assumptions. Threat models are presented in different ways, making it difficult to evaluate the efficacy of solutions: which threats do they address, and under which assumptions? For this reason, I've been working on a uniform framework for defining the threat models, protection goals, and trusted computing base of proposed solutions. You can find more info here.