UGUI Click Event Mechanism
Source: Internet | Editor: 程序博客网 | Date: 2024/05/21 20:22
0x01 Where Clicks Start
Decompiling EventSystem gives a rough flowchart of how the various click and drag response events in UGUI are produced; the yellow portion of that flowchart is the focus of the analysis below.
Decompiling PointerInputModule gives the implementation, in which:
```csharp
protected PointerEventData GetTouchPointerEventData(Touch input, out bool pressed, out bool released)
{
    PointerEventData data;
    bool created = this.GetPointerData(input.fingerId, out data, true);
    data.Reset();
    pressed = created || input.phase == TouchPhase.Began;
    released = input.phase == TouchPhase.Canceled || input.phase == TouchPhase.Ended;
    if (created)
        data.position = input.position;
    data.delta = !pressed ? input.position - data.position : Vector2.zero;
    data.position = input.position;
    data.button = PointerEventData.InputButton.Left;
    this.eventSystem.RaycastAll(data, this.m_RaycastResultCache);
    RaycastResult firstRaycast = BaseInputModule.FindFirstRaycast(this.m_RaycastResultCache);
    data.pointerCurrentRaycast = firstRaycast;
    this.m_RaycastResultCache.Clear();
    return data;
}
```
This method is called mainly from ProcessTouchEvents, which handles each touch point. The call chain starts in EventSystem's Update, goes through Process, and arrives here, so it is polled every frame.
The core of the method is these few lines:
```csharp
this.eventSystem.RaycastAll(data, this.m_RaycastResultCache);
RaycastResult firstRaycast = BaseInputModule.FindFirstRaycast(this.m_RaycastResultCache);
data.pointerCurrentRaycast = firstRaycast;
```
It first calls this.eventSystem.RaycastAll to obtain a result list, then takes the object that should respond first.
Decompiling BaseInputModule:
```csharp
protected static RaycastResult FindFirstRaycast(List<RaycastResult> candidates)
{
    for (int index = 0; index < candidates.Count; ++index)
    {
        if (candidates[index].gameObject != null)
            return candidates[index];
    }
    return new RaycastResult();
}
```
This simply returns the first valid entry.
0x02 Generating the Hit-Result Queue
The implementation in EventSystem:
```csharp
public void RaycastAll(PointerEventData eventData, List<RaycastResult> raycastResults)
{
    raycastResults.Clear();
    List<BaseRaycaster> raycasters = RaycasterManager.GetRaycasters();
    for (int index = 0; index < raycasters.Count; ++index)
    {
        BaseRaycaster baseRaycaster = raycasters[index];
        if (baseRaycaster != null && baseRaycaster.IsActive())
            baseRaycaster.Raycast(eventData, raycastResults);
    }
    raycastResults.Sort(EventSystem.s_RaycastComparer);
}

private static int RaycastComparer(RaycastResult lhs, RaycastResult rhs)
{
    if (lhs.module != rhs.module)
    {
        if (lhs.module.eventCamera != null && rhs.module.eventCamera != null
            && lhs.module.eventCamera.depth != rhs.module.eventCamera.depth)
        {
            if (lhs.module.eventCamera.depth < rhs.module.eventCamera.depth)
                return 1;
            return lhs.module.eventCamera.depth == rhs.module.eventCamera.depth ? 0 : -1;
        }
        if (lhs.module.sortOrderPriority != rhs.module.sortOrderPriority)
            return rhs.module.sortOrderPriority.CompareTo(lhs.module.sortOrderPriority);
        if (lhs.module.renderOrderPriority != rhs.module.renderOrderPriority)
            return rhs.module.renderOrderPriority.CompareTo(lhs.module.renderOrderPriority);
    }
    if (lhs.sortingLayer != rhs.sortingLayer)
        return SortingLayer.GetLayerValueFromID(rhs.sortingLayer).CompareTo(SortingLayer.GetLayerValueFromID(lhs.sortingLayer));
    if (lhs.sortingOrder != rhs.sortingOrder)
        return rhs.sortingOrder.CompareTo(lhs.sortingOrder);
    if (lhs.depth != rhs.depth)
        return rhs.depth.CompareTo(lhs.depth);
    if (lhs.distance != rhs.distance)
        return lhs.distance.CompareTo(rhs.distance);
    return lhs.index.CompareTo(rhs.index);
}
```
In other words, "raycasting against all objects" actually means fetching every registered Raycaster from RaycasterManager and asking each one in turn what it can hit. The loop itself does not consider occlusion; it produces a set of hit results, which are then sorted by camera depth, sortingLayer, sortingOrder, depth, distance, and so on. After sorting, the results have a priority order: if control A occludes control B, the result list contains A before B.
You might object: whoa, does it really sort the results of that many controls? Don't worry; this top-level sort is not over every control in the scene. As for why, read on.
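As a side note, RaycastAll is public, so game code can reuse the same entry point to ask what UI currently lies under the pointer. A minimal sketch (the class name UnderPointerProbe is my own; it assumes an EventSystem exists in the scene):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.EventSystems;

public class UnderPointerProbe : MonoBehaviour
{
    void Update()
    {
        // Build pointer data for the current mouse position.
        var pointerData = new PointerEventData(EventSystem.current)
        {
            position = Input.mousePosition
        };
        var results = new List<RaycastResult>();
        // Same call the input module makes every frame; results come back
        // already sorted, so results[0] is the object that would get the click.
        EventSystem.current.RaycastAll(pointerData, results);
        if (results.Count > 0)
            Debug.Log("Topmost UI under pointer: " + results[0].gameObject.name);
    }
}
```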
Decompiling BaseRaycaster:
```csharp
public abstract void Raycast(PointerEventData eventData, List<RaycastResult> resultAppendList);
```
Raycast here is abstract, with no implementation. What is actually used is GraphicRaycaster. Create a Canvas and you will see one added automatically: this component is tied to the canvas. Combined with the sorting in RaycastAll above, each canvas produces its own list of hit results, which are appended to the overall list and then sorted.
Decompiling GraphicRaycaster shows the implementation. The code is long; read it calmly, line by line.
```csharp
public override void Raycast(PointerEventData eventData, List<RaycastResult> resultAppendList)
{
    if (this.canvas == null)
        return;
    Vector3 position = Display.RelativeMouseAt(eventData.position);
    int targetDisplay = this.canvas.targetDisplay;
    if (position.z != targetDisplay)
        return;
    if (position.z == 0.0f)
        position = eventData.position;
    Vector2 viewportPoint;
    if (this.eventCamera == null)
    {
        float width = Screen.width;
        float height = Screen.height;
        if (targetDisplay > 0 && targetDisplay < Display.displays.Length)
        {
            width = Display.displays[targetDisplay].systemWidth;
            height = Display.displays[targetDisplay].systemHeight;
        }
        viewportPoint = new Vector2(position.x / width, position.y / height);
    }
    else
        viewportPoint = this.eventCamera.ScreenToViewportPoint(position);
    if (viewportPoint.x < 0.0f || viewportPoint.x > 1.0f || viewportPoint.y < 0.0f || viewportPoint.y > 1.0f)
        return;
    float num3 = float.MaxValue;  // distance to the nearest blocking (non-UI) object
    Ray r = new Ray();
    if (this.eventCamera != null)
        r = this.eventCamera.ScreenPointToRay(position);
    if (this.canvas.renderMode != RenderMode.ScreenSpaceOverlay && this.blockingObjects != BlockingObjects.None)
    {
        float maxDistance = 100f;
        if (this.eventCamera != null)
            maxDistance = this.eventCamera.farClipPlane - this.eventCamera.nearClipPlane;
        RaycastHit hit;
        if ((this.blockingObjects == BlockingObjects.ThreeD || this.blockingObjects == BlockingObjects.All)
            && ReflectionMethodsCache.Singleton.raycast3D != null
            && ReflectionMethodsCache.Singleton.raycast3D(r, out hit, maxDistance, (int) this.m_BlockingMask))
            num3 = hit.distance;
        if ((this.blockingObjects == BlockingObjects.TwoD || this.blockingObjects == BlockingObjects.All)
            && ReflectionMethodsCache.Singleton.raycast2D != null)
        {
            RaycastHit2D raycastHit2D = ReflectionMethodsCache.Singleton.raycast2D(
                r.origin, r.direction, maxDistance, (int) this.m_BlockingMask);
            if (raycastHit2D.collider != null)
                num3 = raycastHit2D.fraction * maxDistance;
        }
    }
    this.m_RaycastResults.Clear();
    GraphicRaycaster.Raycast(this.canvas, this.eventCamera, position, this.m_RaycastResults);
    for (int index = 0; index < this.m_RaycastResults.Count; ++index)
    {
        GameObject gameObject = this.m_RaycastResults[index].gameObject;
        bool appendGraphic = true;
        if (this.ignoreReversedGraphics)
        {
            appendGraphic = this.eventCamera != null
                ? Vector3.Dot(this.eventCamera.transform.rotation * Vector3.forward, gameObject.transform.rotation * Vector3.forward) > 0.0f
                : Vector3.Dot(Vector3.forward, gameObject.transform.rotation * Vector3.forward) > 0.0f;
        }
        if (appendGraphic)
        {
            float num1;  // distance from the camera to this graphic along the ray
            if (this.eventCamera == null || this.canvas.renderMode == RenderMode.ScreenSpaceOverlay)
            {
                num1 = 0.0f;
            }
            else
            {
                Transform transform = gameObject.transform;
                Vector3 forward = transform.forward;
                num1 = Vector3.Dot(forward, transform.position - r.origin) / Vector3.Dot(forward, r.direction);
                if (num1 < 0.0f)
                    continue;
            }
            if (num1 < num3)
            {
                RaycastResult raycastResult = new RaycastResult()
                {
                    gameObject = gameObject,
                    module = this,
                    distance = num1,
                    screenPosition = (Vector2) position,
                    index = resultAppendList.Count,
                    depth = this.m_RaycastResults[index].depth,
                    sortingLayer = this.canvas.sortingLayerID,
                    sortingOrder = this.canvas.sortingOrder
                };
                resultAppendList.Add(raycastResult);
            }
        }
    }
}

private static void Raycast(Canvas canvas, Camera eventCamera, Vector2 pointerPosition, List<Graphic> results)
{
    IList<Graphic> graphicsForCanvas = GraphicRegistry.GetGraphicsForCanvas(canvas);
    for (int index = 0; index < graphicsForCanvas.Count; ++index)
    {
        Graphic graphic = graphicsForCanvas[index];
        if (graphic.depth != -1 && graphic.raycastTarget
            && RectTransformUtility.RectangleContainsScreenPoint(graphic.rectTransform, pointerPosition, eventCamera)
            && graphic.Raycast(pointerPosition, eventCamera))
            GraphicRaycaster.s_SortedGraphics.Add(graphic);
    }
    GraphicRaycaster.s_SortedGraphics.Sort((g1, g2) => g2.depth.CompareTo(g1.depth));
    for (int index = 0; index < GraphicRaycaster.s_SortedGraphics.Count; ++index)
        results.Add(GraphicRaycaster.s_SortedGraphics[index]);
    GraphicRaycaster.s_SortedGraphics.Clear();
}
```
First, note that different values of this.canvas.renderMode lead to different ways of performing the test.
Next, resultAppendList.Add(raycastResult) is where results enter the final list. Before that, the code only writes into the queue when num1 < num3; in effect it filters the queue produced by the static method below into the final list. num3 is the distance at which something else blocks the pointer, found via a physics raycast. If the canvas is ScreenSpaceOverlay, num3 stays at float.MaxValue, so the test always passes; otherwise only controls that are not blocked get through. This is how non-UI objects in the scene can occlude UI and suppress its events.
Then look at the static method's implementation: this is the other layer, the per-control test of whether each control can be hit. Controls that pass are added to a list, that list is sorted, and finally its contents are appended to resultAppendList.
The per-control hit test checks three things. First, graphic.raycastTarget: this is the Raycast Target checkbox we normally toggle on a control in the Unity editor. Second, the graphic checks whether the pointer lies inside its rectangle; outside that area there is never a click event. Finally, it calls the graphic's own Raycast to decide whether it can be hit. Only when all three pass does the control move on to the subsequent sorting.
Decompiling Graphic, we see:
```csharp
public virtual bool Raycast(Vector2 sp, Camera eventCamera)
{
    if (!this.isActiveAndEnabled)
        return false;
    Transform t = this.transform;
    List<Component> componentList = ListPool<Component>.Get();
    bool ignoredParentGroups = false;
    bool continueTraversal = true;
    // Walk up the hierarchy, asking every ICanvasRaycastFilter along the way.
    while (t != null)
    {
        t.GetComponents<Component>(componentList);
        for (int index = 0; index < componentList.Count; ++index)
        {
            Canvas canvas = componentList[index] as Canvas;
            if (canvas != null && canvas.overrideSorting)
                continueTraversal = false;
            ICanvasRaycastFilter filter = componentList[index] as ICanvasRaycastFilter;
            if (filter != null)
            {
                bool raycastValid = true;
                CanvasGroup canvasGroup = componentList[index] as CanvasGroup;
                if (canvasGroup != null)
                {
                    if (!ignoredParentGroups && canvasGroup.ignoreParentGroups)
                    {
                        ignoredParentGroups = true;
                        raycastValid = filter.IsRaycastLocationValid(sp, eventCamera);
                    }
                    else if (!ignoredParentGroups)
                        raycastValid = filter.IsRaycastLocationValid(sp, eventCamera);
                }
                else
                    raycastValid = filter.IsRaycastLocationValid(sp, eventCamera);
                if (!raycastValid)
                {
                    ListPool<Component>.Release(componentList);
                    return false;
                }
            }
        }
        t = continueTraversal ? t.parent : null;
    }
    ListPool<Component>.Release(componentList);
    return true;
}
```
In other words, if the drawn Graphic (or anything above it) implements the ICanvasRaycastFilter interface, that interface's IsRaycastLocationValid is called to decide whether the click counts. The Image control uses this facility to implement alpha-based click-through.
0x03 How Image Implements Alpha Click-Through
Decompiling Image we can see:
```csharp
public class Image : MaskableGraphic, ISerializationCallbackReceiver, ILayoutElement, ICanvasRaycastFilter
```
The last interface in the list, ICanvasRaycastFilter, is the canvas raycast-filtering interface.
Decompiling ICanvasRaycastFilter in turn, we see:
```csharp
namespace UnityEngine
{
    public interface ICanvasRaycastFilter
    {
        bool IsRaycastLocationValid(Vector2 sp, Camera eventCamera);
    }
}
```
IsRaycastLocationValid is the method that decides whether the current click position counts as a raycast hit.
The decompiled Image code contains its implementation:
```csharp
public virtual bool IsRaycastLocationValid(Vector2 screenPoint, Camera eventCamera)
{
    if (this.alphaHitTestMinimumThreshold <= 0.0f)
        return true;
    if (this.alphaHitTestMinimumThreshold > 1.0f)
        return false;
    if (this.activeSprite == null)
        return true;
    Vector2 localPoint;
    if (!RectTransformUtility.ScreenPointToLocalPointInRectangle(this.rectTransform, screenPoint, eventCamera, out localPoint))
        return false;
    Rect pixelAdjustedRect = this.GetPixelAdjustedRect();
    localPoint.x += this.rectTransform.pivot.x * pixelAdjustedRect.width;
    localPoint.y += this.rectTransform.pivot.y * pixelAdjustedRect.height;
    localPoint = this.MapCoordinate(localPoint, pixelAdjustedRect);
    Rect textureRect = this.activeSprite.textureRect;
    Vector2 normalized = new Vector2(localPoint.x / textureRect.width, localPoint.y / textureRect.height);
    float u = Mathf.Lerp(textureRect.x, textureRect.xMax, normalized.x) / this.activeSprite.texture.width;
    float v = Mathf.Lerp(textureRect.y, textureRect.yMax, normalized.y) / this.activeSprite.texture.height;
    try
    {
        // Sample the sprite's alpha at the click position.
        return this.activeSprite.texture.GetPixelBilinear(u, v).a >= this.alphaHitTestMinimumThreshold;
    }
    catch (UnityException ex)
    {
        Debug.LogError("Using alphaHitTestMinimumThreshold greater than 0 on Image whose sprite texture cannot be read. "
            + ex.Message + " Also make sure to disable sprite packing for this sprite.", this);
        return true;
    }
}
```
So the UGUI Image control already ships with the ability to let clicks pass through transparent regions: it maps the click position to the sprite's UV coordinates and samples the pixel alpha there. You control this from the outside by setting image.alphaHitTestMinimumThreshold.
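A minimal usage sketch (the component name AlphaHitSetup is my own): enabling alpha click-through on an Image at runtime. Note that, per the error message in the code above, the sprite's texture must be readable (Read/Write enabled in its import settings) for the sampling to work.

```csharp
using UnityEngine;
using UnityEngine.UI;

public class AlphaHitSetup : MonoBehaviour
{
    void Awake()
    {
        var image = GetComponent<Image>();
        // Only pixels with alpha >= 0.5 count as hits;
        // clicks on more transparent pixels fall through.
        image.alphaHitTestMinimumThreshold = 0.5f;
    }
}
```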
If you write your own control derived from Image, its clickable area is simply the control's Rect, regardless of the mesh you draw inside it. In that case you need to override this method and implement your own hit test.
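A sketch of such an override, under the assumption of a hypothetical CircleImage control that only accepts clicks inside the inscribed circle of its rect:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class CircleImage : Image
{
    public override bool IsRaycastLocationValid(Vector2 screenPoint, Camera eventCamera)
    {
        // Convert the screen point into this rect's local space.
        Vector2 local;
        if (!RectTransformUtility.ScreenPointToLocalPointInRectangle(
                rectTransform, screenPoint, eventCamera, out local))
            return false;
        // Accept the hit only inside the circle inscribed in the rect.
        float radius = Mathf.Min(rectTransform.rect.width, rectTransform.rect.height) * 0.5f;
        return (local - rectTransform.rect.center).sqrMagnitude <= radius * radius;
    }
}
```

The same pattern works for any shape: as long as the method returns false, the click falls through to whatever lies beneath.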
0x04 Summary
From the analysis above we can see, step by step, how Unity's event response is assembled. On every tick, for every pointer position: first, each Canvas filters its Graphics down to the clickable ones and sorts them, determining which control on that canvas is hit first; then the clickable controls from all canvases are merged and sorted again, determining which canvas is hit first; finally, events are dispatched by polling in that order.
0x05 Optimization
Knowing the whole pipeline, targeted optimizations suggest themselves. For example, if no control under a canvas needs to respond, disabling that canvas's GraphicRaycaster outright is far cheaper than unchecking Raycast Target on every control, and so on.
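A sketch of that optimization (the helper name CanvasInputToggle is hypothetical): since RaycastAll skips inactive raycasters entirely, one flag gates the whole canvas.

```csharp
using UnityEngine;
using UnityEngine.UI;

public static class CanvasInputToggle
{
    // Enable or disable input for an entire canvas with a single component,
    // instead of touching raycastTarget on every child Graphic.
    public static void SetInteractive(Canvas canvas, bool interactive)
    {
        var raycaster = canvas.GetComponent<GraphicRaycaster>();
        if (raycaster != null)
            raycaster.enabled = interactive;
    }
}
```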