Our existing interface features a digital twin representation of the world, which users can explore in mixed reality by physically moving or by using the VR controller to teleport to a desired location. This world view integrates all perception inputs from the depth cameras, as well as a third-person view of the robot generated from proprioception sensors and state estimation. The interface also incorporates a panel that presents the camera feed showing the robot's point of view, which users can position according to their preference.

The user can operate the robot using VR trackers and controllers. During kinematic streaming, a yellow virtual ghost robot appears, overlaying the robot visualizer model. The ghost serves as a reference, representing the solution found by the IK algorithm for the current VR input.

To use teleoperation assistance, the user can refer to an information panel displayed on the right VR controller whenever an object is detected for which assistance is available. This panel informs the user that the assistance can be activated by pressing a dedicated button on the left controller. Once activated, the panel prompts the user to move and start performing the task. This allows the robot to analyze the user's initial inputs and determine trajectories for each controlled body part accordingly. Following this initial processing, the system presents a preview of the computed motion: predictability over the robot's actions is provided through a visualization of the robot as a gray ghost preview along spline trajectories. At this point, the user can either accept or reject the suggested motion by pressing a dedicated button on the VR controller. Upon validation, the user gains control over the robot and can direct the movements along the computed trajectories by tilting the joystick of the controller forward.

The system follows a series of operations. The learning phase involves recording a few demonstrations of the task performed in
different ways by a human operator in a simulated environment. ProMPs are learned from these demonstrations. During teleoperation, the system recognizes the current task by identifying the corresponding ProMP, aided by semantic object information. The system then updates the posterior distribution of the ProMP using the initial user input and the estimated object pose. The mean trajectory of the updated ProMP is used as a reference for the robot controller. If an affordance is available for that object, a blending mechanism is used to achieve smooth transitions between the ProMP-generated motions and the affordance templates.

To emphasize how our system adapts to user intention, we extend our evaluation to include a punching task. In this example, the user merely initiates a portion of the punching motion, specifically indicating a hook punch technique. Recognizing this, the robot discerns that it is tasked with a punching action and that, among the various techniques available, it should execute a hook punch. The user does not need to focus on the precision of the subsequent motion, as the initial input is sufficient for the robot to complete the task accurately. In such cases, the preview feature is turned off to enable quicker execution of the motion. Here is the teleoperation assistant in action, demonstrating various punching techniques on different targets.
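The ProMP pipeline described above (fit weights to a few demonstrations, then condition the weight posterior on the user's initial input) can be sketched in a few lines. This is a minimal illustration of standard ProMP Gaussian conditioning, not the system's actual implementation; the RBF basis, its width, the number of basis functions, and the observation noise are all assumed values.

```python
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized RBF basis vector Phi(t) for a phase t in [0, 1] (illustrative basis)."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-(t - centers) ** 2 / (2 * width))
    return phi / phi.sum()

def condition_promp(mu_w, Sigma_w, t_obs, y_obs, sigma_y=1e-6):
    """Condition the ProMP weight distribution on observing y_obs at phase t_obs."""
    Phi = rbf_features(t_obs)[None, :]            # (1, n_basis)
    S = Phi @ Sigma_w @ Phi.T + sigma_y           # innovation covariance
    K = Sigma_w @ Phi.T / S                       # Kalman-style gain
    mu_new = mu_w + (K * (y_obs - Phi @ mu_w)).ravel()
    Sigma_new = Sigma_w - K @ Phi @ Sigma_w
    return mu_new, Sigma_new

# Learning phase: fit weights to each (synthetic) demonstration by least squares,
# then take the empirical mean and covariance over demonstrations.
rng = np.random.default_rng(0)
T = np.linspace(0.0, 1.0, 50)
Phi_all = np.stack([rbf_features(t) for t in T])                        # (50, 10)
demos = [np.sin(np.pi * T) * (1 + 0.1 * rng.standard_normal()) for _ in range(8)]
W = np.stack([np.linalg.lstsq(Phi_all, y, rcond=None)[0] for y in demos])
mu_w, Sigma_w = W.mean(axis=0), np.cov(W.T) + 1e-6 * np.eye(10)

# Teleoperation: condition on the user's initial input (here 0.5 at phase 0.1);
# the mean of the updated ProMP is the reference for the robot controller.
mu_c, _ = condition_promp(mu_w, Sigma_w, t_obs=0.1, y_obs=0.5)
```

After conditioning, the mean trajectory is pulled through the user's observed input at the observed phase, which is what lets a partial initial motion select and shape the rest of the reference trajectory.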
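The blending mechanism mentioned in the transcript could, for instance, mix the ProMP reference into the affordance-template reference with a smooth time-varying weight. The smoothstep profile and the phase bounds below are illustrative assumptions; the source does not specify the actual blending law.

```python
import numpy as np

def smoothstep(s):
    """C1-continuous ramp from 0 to 1 over s in [0, 1] (assumed blend profile)."""
    s = np.clip(s, 0.0, 1.0)
    return 3 * s**2 - 2 * s**3

def blend(q_promp, q_affordance, t, t_start, t_end):
    """Blend a ProMP reference into an affordance-template reference.

    Before t_start the ProMP motion is followed exactly; after t_end the
    affordance template takes over; in between, the two are mixed with a
    smooth weight so the reference stays continuous at the hand-over.
    """
    alpha = smoothstep((t - t_start) / (t_end - t_start))
    return (1 - alpha) * q_promp + alpha * q_affordance
```

A smooth (rather than linear) weight keeps the blended reference's velocity continuous at both ends of the transition window, which is one plausible way to achieve the "smooth transitions" the transcript describes.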