@andreasplesch
Last active April 4, 2019 18:25
PlaneSensorScreenMode

Drag the box anywhere

The scene illustrates the proposed screen mode for PlaneSensor. In screen mode, the plane sensor allows dragging an object in the current plane of the screen, which allows free manipulation of an object's position.

Drag the red box from the default viewpoint. It will be dragged in the XY (vertical) plane.

Then rotate the view to a map view, or choose the alternative viewpoint by pressing PgUp or the button. Drag the box and it will be dragged in the XZ (map) plane.

https://bl.ocks.org/andreasplesch/f196e98c86bc9dc9686a7e5b4acede7d https://gist.github.com/andreasplesch/f196e98c86bc9dc9686a7e5b4acede7d

Introduction

A common and useful dragging method is to constrain dragging to a plane parallel to the current orientation of the screen. This is very intuitive and often what is expected if there is no obvious surface or plane to constrain dragging to. It also allows for flexible yet accurate dragging based on viewpoints with known viewing directions, such as a map view or an east-west cross-sectional view.
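The core of such a mode is an ordinary ray-plane intersection in which the plane normal is taken from the viewing direction. The following is a minimal, self-contained sketch (plain-object vectors and hypothetical helper names, not the x3dom API) of how each pointer move maps to a point in a screen-parallel plane:

```javascript
// Minimal sketch of screen-parallel dragging (hypothetical helpers, not
// the x3dom API). The tracking plane passes through the initial hit
// point, and its normal is the viewing direction, so every drag result
// stays parallel to the screen.
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function add(a, b) { return { x: a.x + b.x, y: a.y + b.y, z: a.z + b.z }; }
function scale(a, s) { return { x: a.x * s, y: a.y * s, z: a.z * s }; }

// Intersect a ray (pos, dir) with the plane through `anchor` that has
// normal `normal`. Returns null when the ray is (nearly) parallel to
// the plane or the intersection lies behind the ray origin.
function intersectPlane(pos, dir, anchor, normal) {
  var denom = dot(normal, dir);
  if (Math.abs(denom) < 1e-6) return null;
  var t = dot(normal, sub(anchor, pos)) / denom;
  return t >= 0 ? add(pos, scale(dir, t)) : null;
}

// Screen mode: use the viewing direction as the plane normal.
var viewDir = { x: 0, y: 0, z: -1 };      // camera looks down -Z
var hit = { x: 1, y: 2, z: -5 };          // initial bearing intersection
var ray = { pos: { x: 0, y: 0, z: 0 }, dir: { x: 0.1, y: 0.2, z: -1 } };
var p = intersectPlane(ray.pos, ray.dir, hit, viewDir);
// p = { x: 0.5, y: 1, z: -5 }: the drag point stays in the z = -5
// plane, i.e. parallel to the screen.
```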

However, X3D currently does not offer such functionality. It has previously been suggested as the function of a PointSensor (implemented in freeWrl) or a SpaceSensor (http://doc.instantreality.org/documentation/nodetype/SpaceSensor/#).

Motivation

Such a feature is clearly desirable, and would work internally as a regular PlaneSensor whose tracking plane is aligned with the current screen orientation. Since a plane is still being sensed, and PlaneSensor features such as autoOffset and the min/maxPosition constraints (see below) still apply, it makes sense to extend the PlaneSensor node with a new field rather than specify a new node. Implementation is expected to be straightforward, since only the orientation of the tracking plane needs to be reconsidered. Such a straightforward implementation has been demonstrated in two browsers (x3dom and freeWrl). The proposed extension has the additional advantage that it will allow for other surface-based dragging modes in the future.

Proposal

It is proposed to extend PlaneSensor with a 'planeOrientation' field with this signature:

SFString [in,out] planeOrientation "XY" ["XY", "screen" ..]

Other potential names for such a field include "trackingPlane" or "trackSurface" or "trackMode".

The default value of "XY" instructs the browser to use the XY plane as the tracking plane (except when in implicit line sensor mode). This default will make the field backward compatible. A value of "screen" instructs the browser to instead use a plane normal to the initial (and normally unchanged) viewing direction when dragging begins. The requirement that the tracking plane is anchored at the intersection of the initial bearing with the sensor's sibling geometry remains unchanged.
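Concretely, a browser could select the tracking-plane normal from the field value along these lines (a sketch under assumptions: plain-object vectors, and `worldToLocalVec` standing in for the browser's world-to-local vector transform — both hypothetical names):

```javascript
// Hypothetical sketch: choosing the tracking-plane normal from the
// proposed planeOrientation field. For "XY" the normal is local +Z
// (the Z=0 plane); for "screen" it is the current viewing direction,
// transformed into the sensor's local coordinate system.
function normalize(v) {
  var l = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return { x: v.x / l, y: v.y / l, z: v.z / l };
}

function trackingPlaneNormal(planeOrientation, worldToLocalVec, viewDirWorld) {
  if (planeOrientation === "screen") {
    return normalize(worldToLocalVec(viewDirWorld));
  }
  return { x: 0, y: 0, z: 1 }; // default "XY" keeps current behavior
}

// With an identity sensor transform and a camera looking down world -Z:
var identity = function (v) { return v; };
var n = trackingPlaneNormal("screen", identity, { x: 0, y: 0, z: -1 });
// n = { x: 0, y: 0, z: -1 }: the tracking plane is parallel to the screen
```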

min/maxPosition

min/maxPosition currently is defined as:

"minPosition and maxPosition may be set to clamp translation_changed events to a range of values as measured from the origin of the Z=0 plane of the local sensor coordinate system..."

Since, in the case of a screen-oriented tracking plane, translation_changed would not be restricted to the Z=0 plane of the local sensor coordinate system, the min/maxPosition fields need to be slightly reinterpreted for this case.

translation_changed would be restricted to the screen-parallel tracking plane, so clamping of translation_changed should refer to this plane. The limits set by min/maxPosition refer to an X and a Y axis, normally those of the local sensor coordinate system. It is proposed that the limits instead refer to axes parallel to the screen edges: X to the horizontal edge and Y to the vertical edge, with the origin at the center of the screen. These are the natural directions for these axes, and are given by the view frustum, or clip space.
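Under this interpretation, clamping happens along screen-aligned axes rather than local sensor axes. A minimal sketch (hypothetical helpers; in practice the translation would first be rotated into view space and back, which is omitted here for a view that is already axis-aligned):

```javascript
// Hypothetical sketch of min/maxPosition clamping in screen mode.
// Clamping acts on the horizontal (x) and vertical (y) screen-aligned
// components of the translation; depth is never clamped.
function clampComponent(v, min, max) {
  // Per the existing spec rule: min > max disables clamping for
  // that component.
  return min <= max ? Math.min(Math.max(v, min), max) : v;
}

function clampScreenTranslation(t, minPos, maxPos) {
  return {
    x: clampComponent(t.x, minPos.x, maxPos.x), // horizontal screen edge
    y: clampComponent(t.y, minPos.y, maxPos.y), // vertical screen edge
    z: t.z                                      // depth is unconstrained
  };
}

var clamped = clampScreenTranslation(
  { x: 15, y: -3, z: 2 },   // raw screen-space translation
  { x: -10, y: -10 },       // minPosition
  { x: 10, y: 10 }          // maxPosition
);
// clamped = { x: 10, y: -3, z: 2 }
```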

Language suggestions

20.4.2 has this paragraph (http://www.web3d.org/documents/specifications/19775-1/V3.3/Part01/components/pointingsensor.html#PlaneSensor)

"Upon activation of the pointing device (e.g., mouse button down) while indicating the sensor's geometry, an isActive TRUE event is sent. Pointer motion is mapped into relative translation in the tracking plane, (a plane parallel to the local sensor coordinate system Z=0 plane and coincident with the initial point of intersection). For each subsequent movement of the bearing, a translation_changed event is output which corresponds to the sum of the relative translation from the original intersection point to the intersection point of the new bearing in the plane plus the offset value. The sign of the translation is defined by the Z=0 plane of the local sensor coordinate system. trackPoint_changed events reflect the unclamped drag position on the surface of this plane. When the pointing device is deactivated and autoOffset is TRUE, offset is set to the last translation_changed value and an offset_changed event is generated. More details are provided in 20.2.2 Drag sensors."

Here is a version which introduces planeOrientation="screen":

"Upon activation of the pointing device (e.g., mouse button down) while indicating the sensor's geometry, an isActive TRUE event is sent. Pointer motion is mapped into relative translation in the tracking plane. The tracking plane is defined by the planeOrientation field. If the planeOrientation field has a value of "XY", the tracking plane is a plane parallel to the local sensor coordinate system Z=0 plane. If it has a value of "screen", the tracking plane is a plane parallel to the current orientation of the screen or normal to the current viewing direction. In both cases, the tracking plane is coincident with the initial point of intersection.

For each subsequent movement of the bearing, a translation_changed event is output which corresponds to the sum of the relative translation from the original intersection point to the intersection point of the new bearing in the plane plus the offset value. The sign of the translation is defined by the Z=0 plane of the local sensor coordinate system, or the screen plane. ..."

The min/maxPosition paragraph currently reads:

"minPosition and maxPosition may be set to clamp translation_changed events to a range of values as measured from the origin of the Z=0 plane of the local sensor coordinate system. If the X or Y component of minPosition is greater than the corresponding component of maxPosition, translation_changed events are not clamped in that dimension. If the X or Y component of minPosition is equal to the corresponding component of maxPosition, that component is constrained to the given value. This technique provides a way to implement a line sensor that maps dragging motion into a translation in one dimension."

A version which defines min/maxPosition for screen mode could read:

"minPosition and maxPosition may be set to clamp translation_changed events to a range of values. These limits are measured from the origin of the Z=0 plane of the local sensor coordinate system if the planeOrientation field has a value of "XY". If the planeOrientation field has a value of "screen", the limits are measured from the origin of the tracking plane in the direction of the screen edges, horizontally for the x axis components, and vertically for the y axis components. ..."

<?xml version="1.0" encoding="UTF-8"?>
<X3D>
<Scene>
<Viewpoint description='A: default' />
<!--Viewpoint description='B-alongX' position='10 0 0' orientation='0 1 0 1.57' /-->
<Viewpoint description='C: alongY' position='0 10 0' orientation='1 0 0 -1.57' />
<Transform DEF='PlaneSensorContainer' translation='0 2 0'>
<PlaneSensor DEF='PS' planeOrientation='screen' minPosition='-10 -10' maxPosition='10 10' />
<Transform DEF='BoxMover'>
<Shape DEF='box'>
<Appearance><Material diffuseColor='1.0 0.0 0.0'></Material></Appearance>
<Box size='2 1 2'/>
</Shape>
</Transform>
<Transform DEF='trackPointMover'>
<Shape DEF='trackPoint'>
<Appearance>
<Material diffuseColor='0.0 1.0 0.0'></Material>
</Appearance>
<Sphere radius='0.2'/>
</Shape>
</Transform>
</Transform>
<ROUTE fromNode='PS' fromField='translation_changed' toNode='BoxMover' toField='set_translation'/>
<ROUTE fromNode='PS' fromField='trackPoint_changed' toNode='trackPointMover' toField='set_translation'/>
<Transform DEF='grid' rotation='1 0 0 1.57' translation='0 -1 0'>
<Shape >
<Appearance>
<PixelTexture image="2 2 4 0xffffff00 0x00000080 0x00000080 0xffffff00">
<TextureProperties magnificationFilter="NEAREST_PIXEL" />
</PixelTexture>
<TextureTransform scale="25 25"/>
</Appearance>
<Rectangle2D solid='false' size='50 50' />
</Shape>
</Transform>
</Scene>
</X3D>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<script type="text/javascript" src="https://www.x3dom.org/download/dev/x3dom-full.debug.js"> </script>
<script type="text/javascript" src="./PlaneSensor.js"> </script>
<link rel="stylesheet" type="text/css" href="https://www.x3dom.org/download/dev/x3dom.css">
<title>X3D PlaneSensor screen mode feature</title>
</head>
<body>
<div id="content">
<x3d width='600px' height='400px'>
<scene>
<inline url='AAA_PlaneSensorScreenMode.x3d'></inline>
</scene>
</x3d>
<button onclick='doclick()'><h2>click to change view</h2></button>
</div>
<script>
function doclick() {
var rt = document.querySelector("x3d").runtime;
rt.nextView();
document.querySelector('h2').textContent = rt.viewpoint()._vf.description;
}
</script>
</body>
</html>
/** @namespace x3dom.nodeTypes */
/*
* X3DOM JavaScript Library
* http://www.x3dom.org
*
* (C)2009 Fraunhofer IGD, Darmstadt, Germany
* Dual licensed under the MIT and GPL
*/
/**
* The plane sensor node translates drag gestures, performed with a pointing device like a mouse,
* into 3D transformations.
*/
x3dom.registerNodeType(
"PlaneSensor",
"PointingDeviceSensor",
defineClass(x3dom.nodeTypes.X3DDragSensorNode,
/**
* Constructor for PlaneSensor
* @constructs x3dom.nodeTypes.PlaneSensor
* @x3d 3.3
* @component PointingDeviceSensor
* @status experimental
* @extends x3dom.nodeTypes.X3DDragSensorNode
* @param {Object} [ctx=null] - context object, containing initial settings like namespace
* @classdesc PlaneSensor converts pointing device motion into 2D translation, parallel to the local Z=0 plane.
* Hint: You can constrain translation output to one axis by setting the respective minPosition and maxPosition
* members to equal values for that axis.
*/
function (ctx)
{
x3dom.nodeTypes.PlaneSensor.superClass.call(this, ctx);
//---------------------------------------
// FIELDS
//---------------------------------------
/**
* The local sensor coordinate system is created by additionally applying the axisRotation field value to
* the local coordinate system of the sensor node.
* @var {x3dom.fields.SFRotation} axisRotation
* @memberof x3dom.nodeTypes.PlaneSensor
* @initvalue 0,0,1,0
* @field x3d
* @instance
*/
this.addField_SFRotation(ctx, 'axisRotation', 0, 0, 1, 0);
/**
* The minPosition and maxPosition fields allow to constrain the 2D output of the plane sensor, along each
* 2D component. If the value of a component in maxPosition is smaller than the value of a component in
* minPosition, output is not constrained along the corresponding direction.
* @var {x3dom.fields.SFVec2f} minPosition
* @memberof x3dom.nodeTypes.PlaneSensor
* @initvalue 0,0
* @field x3d
* @instance
*/
this.addField_SFVec2f(ctx, 'minPosition', 0, 0);
/**
* The minPosition and maxPosition fields allow to constrain the 2D output of the plane sensor, along each
* 2D component. If the value of a component in maxPosition is smaller than the value of a component in
* minPosition, output is not constrained along the corresponding direction.
* @var {x3dom.fields.SFVec2f} maxPosition
* @memberof x3dom.nodeTypes.PlaneSensor
* @initvalue -1,-1
* @field x3d
* @instance
*/
this.addField_SFVec2f(ctx, 'maxPosition', -1, -1);
/**
* Offset value that is incorporated into the translation output of the sensor.
* This value is automatically updated if the value of the autoOffset field is 'true'.
* @var {x3dom.fields.SFVec3f} offset
* @memberof x3dom.nodeTypes.PlaneSensor
* @initvalue 0,0,0
* @field x3d
* @instance
*/
this.addField_SFVec3f(ctx, 'offset', 0, 0, 0);
/**
* Tracking plane orientation in local coordinate system.
* Valid values are "XY" and "screen". "screen" uses the current orientation of the screen.
* @var {x3dom.fields.SFString} planeOrientation
* @memberof x3dom.nodeTypes.PlaneSensor
* @initvalue 'XY'
* @field x3dom
* @instance
*/
this.addField_SFString(ctx, 'planeOrientation', 'XY');
//route-able output fields
//this.addField_SFVec3f(ctx, 'translation_changed', 0, 0, 0);
//---------------------------------------
// PROPERTIES
//---------------------------------------
/**
*
* @type {x3dom.fields.Quaternion}
* @private
*/
//TODO: update on change
this._rotationMatrix = this._vf.axisRotation.toMatrix();
/**
* World-To-Local matrix for this node, including the axisRotation of the sensor
*/
this._worldToLocalMatrix = null;
/**
* Initial intersection point with the sensor's plane, at the time the sensor was activated
* @type {x3dom.fields.SFVec3f}
* @private
*/
this._initialPlaneIntersection = null;
/**
* Plane normal, computed on drag start and used during dragging to compute plane intersections
* @type {x3dom.fields.SFVec3f}
* @private
*/
this._planeNormal = null;
/**
* Current viewarea that is used for dragging, needed for ray setup to compute the plane intersection
*
* @type {x3dom.Viewarea}
* @private
*/
this._viewArea = null;
/**
* Current translation that is produced by this drag sensor
* @type {x3dom.fields.SFVec3f}
* @private
*/
this._currentTranslation = new x3dom.fields.SFVec3f(0.0, 0.0, 0.0);
//special LineSensor mode
this._lineModeAxis = null;
if ( this._vf.minPosition.x == this._vf.maxPosition.x )
this._lineModeAxis = new x3dom.fields.SFVec3f (0, 1, 0);
if ( this._vf.minPosition.y == this._vf.maxPosition.y )
this._lineModeAxis = new x3dom.fields.SFVec3f (1, 0, 0);
},
{
//----------------------------------------------------------------------------------------------------------------------
// PUBLIC FUNCTIONS
//----------------------------------------------------------------------------------------------------------------------
/**
* This function returns the parent transformation of this node, combined with its current axisRotation
* @overrides x3dom.nodeTypes.X3DPointingDeviceSensorNode.getCurrentTransform
*/
getCurrentTransform: function ()
{
var parentTransform = x3dom.nodeTypes.X3DDragSensorNode.prototype.getCurrentTransform.call(this);
return this._rotationMatrix.mult(parentTransform);
},
//----------------------------------------------------------------------------------------------------------------------
// PRIVATE FUNCTIONS
//----------------------------------------------------------------------------------------------------------------------
/**
* @overrides x3dom.nodeTypes.X3DDragSensorNode.prototype._startDragging
* @private
*/
_startDragging: function(viewarea, x, y, wx, wy, wz)
{
x3dom.nodeTypes.X3DDragSensorNode.prototype._startDragging.call(this, viewarea, x, y, wx, wy, wz);
this._viewArea = viewarea;
//save viewMatrix
this._viewMat = this._viewArea.getViewMatrix();
this._viewMatInv = this._viewMat.inverse();
this._currentTranslation = new x3dom.fields.SFVec3f(0.0, 0.0, 0.0).add(this._vf.offset);
//TODO: handle multi-path nodes
//get model matrix for this node, combined with the axis rotation
this._localToWorldMatrix = this.getCurrentTransform();
this._worldToLocalMatrix = this._localToWorldMatrix.inverse();
//remember initial point of intersection with the plane, transform it to local sensor coordinates
this._initialPlaneIntersection = this._worldToLocalMatrix.multMatrixPnt(new x3dom.fields.SFVec3f(wx, wy, wz));
//compute plane normal in local coordinates
this._planeNormal = new x3dom.fields.SFVec3f(0.0, 0.0, 1.0);
var viewRay;
//handle screen mode
if (this._vf.planeOrientation == 'screen') {
viewRay = viewarea.calcViewRay(viewarea._width/2, viewarea._height/2);
this._planeNormal = this._worldToLocalMatrix.multMatrixVec (viewRay.dir.normalize());
}
//handle LineSensor mode robustly
else if ( this._lineModeAxis ) {
viewRay = viewarea.calcViewRay(x, y);
//viewRay.pos = this._worldToLocalMatrix.multMatrixPnt (viewRay.pos);
var viewDir = this._worldToLocalMatrix.multMatrixVec (viewRay.dir.normalize());
var axis = this._lineModeAxis;
//generate suitable intersection plane even if on edge view;
this._planeNormal = axis.cross ( axis.cross (viewDir) ).normalize();
}
},
//----------------------------------------------------------------------------------------------------------------------
/**
* @overrides x3dom.nodeTypes.X3DDragSensorNode._process2DDrag
* @private
*/
_process2DDrag: function(x, y, dx, dy)
{
x3dom.nodeTypes.X3DDragSensorNode.prototype._process2DDrag.call(this, x, y, dx, dy);
var intersectionPoint = null;
var minPos, maxPos;
if (this._initialPlaneIntersection)
{
//compute point of intersection with the plane
var viewRay = this._viewArea.calcViewRay(x, y);
//transform the world coordinates, used for the ray, to local sensor coordinates
viewRay.pos = this._worldToLocalMatrix.multMatrixPnt(viewRay.pos);
viewRay.dir = this._worldToLocalMatrix.multMatrixVec(viewRay.dir.normalize()).normalize();
if ( Math.abs(this._planeNormal.dot(viewRay.dir)) < 0.01 ) return;
intersectionPoint = viewRay.intersectPlane(this._initialPlaneIntersection, this._planeNormal);
//allow interaction from both sides of the plane
if (!intersectionPoint)
{
intersectionPoint = viewRay.intersectPlane(this._initialPlaneIntersection, this._planeNormal.negate());
}
if (intersectionPoint)
{
//compute difference between new point of intersection and initial point
var _translation = intersectionPoint.subtract(this._initialPlaneIntersection);
this._currentTranslation = _translation.add(this._vf.offset);
//clamp translation components, if desired
minPos = this._vf.minPosition;
maxPos = this._vf.maxPosition;
if (this._vf.planeOrientation == 'screen')
{
if (minPos.x <= maxPos.x || minPos.y <= maxPos.y) // project/reproject only if necessary
{
//project currentTranslation into screen plane
var screenTranslation = this._localToWorldMatrix.multMatrixVec(this._currentTranslation);
screenTranslation = this._viewMat.multMatrixVec(screenTranslation);
_clampTranslation (screenTranslation, minPos, maxPos);
// and reproject
screenTranslation = this._viewMatInv.multMatrixVec(screenTranslation);
this._currentTranslation = this._worldToLocalMatrix.multMatrixVec(screenTranslation);
}
}
else {
//recalc track point for line sensor
if (this._lineModeAxis) {
_translation.z = 0;
intersectionPoint = this._initialPlaneIntersection.add(_translation);
//intersectionPoint.z = this._initialPlaneIntersection.z;
if (this._lineModeAxis.x == 0) {
intersectionPoint.x = minPos.x;
}
else {
intersectionPoint.y = minPos.y;
}
}
//normally 0 but force for LineSensor plane
this._currentTranslation.z = 0;
_clampTranslation (this._currentTranslation, minPos, maxPos);
}
//output trackpoint_changed event
this.postMessage('trackPoint_changed', intersectionPoint);
//output translation_changed event
this.postMessage('translation_changed', x3dom.fields.SFVec3f.copy(this._currentTranslation));
}
}
//helper
function _clampTranslation (translation, minPos, maxPos)
{
if (minPos.x <= maxPos.x)
{
translation.x = Math.min(translation.x, maxPos.x);
translation.x = Math.max(translation.x, minPos.x);
}
if (minPos.y <= maxPos.y)
{
translation.y = Math.min(translation.y, maxPos.y);
translation.y = Math.max(translation.y, minPos.y);
}
}
},
//----------------------------------------------------------------------------------------------------------------------
/**
* @overrides x3dom.nodeTypes.X3DDragSensorNode._stopDragging
* @private
*/
_stopDragging: function()
{
x3dom.nodeTypes.X3DDragSensorNode.prototype._stopDragging.call(this);
if (this._vf.autoOffset)
{
this._vf.offset = x3dom.fields.SFVec3f.copy(this._currentTranslation);
this.postMessage('offset_changed', this._vf.offset);
}
}
//----------------------------------------------------------------------------------------------------------------------
}
)
);