Let’s aim the shot. I’ll need a way to see where I’m aiming, and to use that orientation to determine the shot’s velocity. After that, I’ll need to derive the post-bounce velocity, which is also hard-coded at the moment.
For simplicity, I will keep all this behaviour on the shot object itself, but later I will want to move most of it to a spawner. To prepare for that, I’ll start a new MonoBehaviour. This should control a line that starts from the shot, goes toward or through the mouse pointer, and stops at the edge of the screen or at a block. Later on, I’ll probably want to disable the line if it points above the shot.
So how do I draw a line? Well, I right-clicked my GameObject to see if any built-ins would be useful, and found Effects > Line. There’s also an effect called Trail, which I’ll want to remember.
I tried changing the line’s color, and got this weird gradient editor. After some clicking around, I realized the bottom two markers control the color, and the top two markers control alpha, which I assume is transparency. I may as well give it some vaporwavey colors for a bit of flair while I’m working.
The width panel of the line renderer component is in two dimensions, and right-clicking gives “add key”, so I guess you can interpolate the width across the span of the line. I’ll leave that alone for now.
Under Scene Tools I was able to add a point to preview what I’m doing. The Mouse Position mode worked as expected, but I didn’t understand how to get the Physics Raycast one to work.
I increased End Cap Vertices to make it a bit prettier, but it can still look wonky when zooming in or out in the editor. Luckily, I will only have one zoom level in the game. It turns out that when I added a vertex, I may have accidentally added another one at the starting point with z = 1. The Positions dropdown is more reliable than trying to click my way into the right line. Getting rid of the extra point fixed the weird zoom.
Now for my script to get the LineRenderer component: there’s no GetComponentInChild, just GetComponentInChildren. Because of how Unity exposes fields in the inspector panel, there must be a public field called positions.
public class Shooter : MonoBehaviour
{
    LineRenderer aimLine;

    void Start()
    {
        aimLine = GetComponentInChildren<LineRenderer>();
    }

    void Update()
    {
        aimLine.positions[0] = gameObject.transform.position;
        aimLine.positions[1] = ???
    }
}
Now I have to do geometry, based on the mouse position on screen, in world space. Of course, there are endless forum posts to help me with the mouse position. Then, to get the end point, I need to do a raycast. The documentation for Physics.Raycast is busted. The very first example uses different parameters than the function declaration above it. You have to scroll down to see that the signature it actually uses is the next declaration, and that declaration pretends none of its parameters are optional, even though its own example omits them. Rude! I sent feedback, so you, the reader, may not see this issue.
The out parameter hitInfo has a point field, which will be the endpoint of my line. All that’s left is to derive the direction parameter. If direction is just a vector that points in the desired direction, all I have to do is subtract my line’s first point from the pointer position.
So this ought to work, right?
public class Shooter : MonoBehaviour
{
    LineRenderer aimLine;

    void Start()
    {
        aimLine = GetComponentInChildren<LineRenderer>();
    }

    void Update()
    {
        Vector3 mouseWorldPos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        mouseWorldPos.z = 0f;
        Vector3 launchPos = gameObject.transform.position;
        Vector3 direction = mouseWorldPos - launchPos;
        RaycastHit hitInfo;
        Physics.Raycast(launchPos, direction, out hitInfo);
        aimLine.positions[0] = launchPos;
        // Stop the line at the first wall or block it touches.
        aimLine.positions[1] = hitInfo.point;
    }
}
No, I’m told LineRenderer does not contain a definition for positions. The documentation shows that you have to call SetPosition or SetPositions. This becomes:
aimLine.SetPositions(new[] {launchPos, hitInfo.point});
Now I add the script to my “Shot” object, and zero the initial velocity in the “Shot” script. Now the line points upward, but still doesn’t move. Time for some print debugging, the standard Debug.Log("variable name ", variableValue), although Debug.Log doesn’t accept a Vector3 for the second argument. Perhaps there’s a ToString function I can use? There is, but the second argument to Debug.Log is expected to be a UnityEngine.Object, so instead I concatenate with +.
After comparing the printed values of the launch position and the example positions of the line in the scene view, I realize that the line’s positions are in local space, that is, relative to the position of the launch point, rather than world space. I have to subtract the shooter’s position to get the points I want, or set LineRenderer.useWorldSpace to true.
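The local-space fix is just coordinate arithmetic. Here is a sketch in Python; world_to_local is a hypothetical helper of my own, standing in for what Unity’s Transform.InverseTransformPoint does, and it ignores rotation entirely:

```python
def world_to_local(world_point, parent_pos, parent_scale=1.0):
    # Convert a world-space point into a parent's local space.
    # Ignores rotation; Unity's Transform.InverseTransformPoint handles that too.
    return tuple((w - p) / parent_scale for w, p in zip(world_point, parent_pos))

# A world point at (3, 4) seen from a shooter at (1, 1):
print(world_to_local((3.0, 4.0), (1.0, 1.0)))  # (2.0, 3.0)
```

Setting useWorldSpace to true skips this conversion entirely, which is why I prefer it here.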
As a result, the line is longer and pointing down now, but still not changing as I move the mouse pointer. I realize I didn’t define any behavior for the line when the Raycast doesn’t hit anything, but the line also doesn’t move when the raycast should hit. Perhaps these Physics calls only involve 3D colliders, and I have to rely on Physics2D to detect my BoxCollider2D. This turns out to be a simpler API: the first declaration of Physics2D.Raycast gives a RaycastHit2D as the return value.
With a little footwork to move between Vector3 and Vector2, I have a new function body.
void Update()
{
    Vector3 mouseWorldPos = Camera.main.ScreenToWorldPoint(Input.mousePosition);
    var aimToward = new Vector2(mouseWorldPos.x, mouseWorldPos.y);
    Debug.Log("Mouse position " + aimToward.ToString());
    var launchPos = new Vector2(gameObject.transform.position.x, gameObject.transform.position.y);
    Debug.Log("Shooter position " + launchPos.ToString());
    Vector2 direction = aimToward - launchPos;
    Debug.Log("Direction " + direction.ToString());
    // Stop the line at the first wall or block it touches.
    RaycastHit2D hitInfo = Physics2D.Raycast(launchPos, direction);
    aimLine.SetPositions(new[] {new Vector3(launchPos.x, launchPos.y), new Vector3(hitInfo.point.x, hitInfo.point.y)});
}
To keep things simple, I’ll also add some walls so that there’s always a hit.
However, now the line is stuck inside the shooter. Apparently, unlike Physics.Raycast, Physics2D.Raycast “will also detect Collider(s) at the start of the ray.” The most obvious fix is to disable the shot’s collider until it is fired, or, as I had planned, to make the shooter an independent object that spawns the shot(s). I’ll just remove the physics components and the Shot script for now, and add a Shot prefab to spawn later, once I am ready to detect a click (or tap).
Woo hoo! The gradient effect is a little jarring when the line shrinks, but no matter. Now to launch things in the aimed direction. I’ll want to divide the Shooter.Update function into parts, and store direction as a class member. The current function body will go into ShowAimLine.
To detect a click impulse, I need Input.GetMouseButtonDown, which sadly does not reference an enum for the mouse button ids. Later on, I’ll abstract this out to something that determines whether to use touchscreen or mouse inputs. Once there is a click, I call Shoot, which spawns a shot and gives it an appropriate velocity.
// Both of these appear as editable fields in the inspector.
public GameObject shotPrefab;
public float shotSpeed;

void Shoot() {
    GameObject newShot = Instantiate(shotPrefab, gameObject.transform);
    newShot.GetComponent<Shot>().initialVelocity = direction.normalized * shotSpeed;
}

void Update() {
    ShowAimLine();
    if (Input.GetMouseButtonDown(0)) {
        Shoot();
    }
}
After making the prefab, dragging it into the Shot Prefab editor field, and setting Shot Speed to 20, this should result in something that fires one shot per click in the aimed direction. They will all bounce in the same direction, but I should be able to use ten of them to destroy my block.
I was able to destroy the block, but I couldn’t see any of the shots. Is 20 too fast? Oh: when I set the scale based on what I saw in the scene view, I gave it 0.1 for x and y, but it spawns as a child of the shooter, which also has 0.1 scale. Therefore, it could be too small to see.
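The scale mixup is easy to verify with arithmetic: a child’s world scale is its local scale multiplied by every ancestor’s scale, per axis (ignoring rotation):

```python
# Inspector values: the shooter is scaled to 0.1, and I gave the
# shot a local scale of 0.1 too, so as a child it renders at 0.1 * 0.1.
shooter_scale = 0.1
shot_local_scale = 0.1
shot_world_scale = shooter_scale * shot_local_scale
print(round(shot_world_scale, 3))  # 0.01, a hundredth of a unit across
```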
Here’s something silly:
The shots collide with each other, so I need to put them on a different layer so that they “see” the block for collisions, but ignore each other. At the top of the shot object’s Inspector, I add a new layer called Shots and assign it to the object. Then, under the Rigidbody2D component, I add Shots to the Exclude Layers. And it just works!
Now to put a bow on this chapter, I’d like to complete the bounce simulation by computing the new velocity on a hit. I can do that by reflecting the velocity vector about the surface normal of the object the shot hits. Collision2D doesn’t provide a surface normal, but Physics2D.Raycast does. Therefore, I can know the first “bounce velocity” on spawn by casting a ray in the direction of the initial velocity. Then I can just compute the next bounce velocity on each successive bounce, and whenever the environment changes (i.e., a block vanishes).
Ah, but that pesky 2D raycast will hit the shot’s collider from the inside. To get around it, I can use the layerMask parameter. The default value is DefaultRaycastLayers, but its value isn’t given. I know from experience that a layer mask in this context is a bit field, where each bit in the int value represents one layer. The values of the static constants DefaultRaycastLayers, IgnoreRaycastLayer, and AllLayers are not given, but I will assume setting a bit (making it 1, not 0) includes the corresponding layer in the cast. Therefore, I want to unset the bit for the Shots layer, at index 6. If I find an interface for getting layer indices by name, I should use that, but today, I will hard-code 6.
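The bit-field logic can be sketched outside Unity. ALL_LAYERS here is a stand-in value of my own, since the real constants aren’t documented; the point is clearing bit 6:

```python
SHOTS_LAYER = 6          # the index I assigned in the editor
ALL_LAYERS = 0xFFFFFFFF  # stand-in for a mask with every layer set

# Clear bit 6 so the cast ignores the Shots layer.
mask = ALL_LAYERS & ~(1 << SHOTS_LAYER)

assert mask & (1 << SHOTS_LAYER) == 0  # Shots excluded
assert mask & (1 << 0) != 0            # the Default layer (0) still included
```

Note that ^ (XOR) only clears the bit if it was already set, while & ~ clears it unconditionally, which is a little safer when I can’t be sure what DefaultRaycastLayers contains.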
Anyway, Unity offers a handy function, Vector2.Reflect, which

Reflects a vector off the vector defined by a normal.

I assume they mean “off the surface defined by a normal,” which is exactly what I need. The new Shot class becomes:
public class Shot : MonoBehaviour
{
    public Vector2 initialVelocity;
    public Vector2 bounceVelocity;
    private Rigidbody2D rigidBody;

    // This assumes 6 is the index of the "Shots" layer.
    static int bounceCastLayers = Physics2D.DefaultRaycastLayers ^ (1 << 6);

    void PredictBounceVelocity()
    {
        // Identify the first surface this shot will collide with.
        RaycastHit2D hitInfo = Physics2D.Raycast(gameObject.transform.position, rigidBody.velocity, Mathf.Infinity, bounceCastLayers);
        // By reflecting across the surface normal, the angle of reflection should mirror the angle of incidence.
        bounceVelocity = Vector2.Reflect(rigidBody.velocity, hitInfo.normal);
    }

    void Start()
    {
        rigidBody = gameObject.GetComponent<Rigidbody2D>();
        rigidBody.velocity = initialVelocity;
        PredictBounceVelocity();
    }

    void OnCollisionEnter2D(Collision2D other)
    {
        bool destroyed = other.gameObject.GetComponent<Destructible>()?.TakeDamage() ?? false;
        if (destroyed) {
            Debug.Log("Destroyed a block!");
        }
        rigidBody.velocity = bounceVelocity;
        PredictBounceVelocity();
    }
}
But the resulting velocity isn’t always what I expected.
The bounce consistently works as expected against the right wall. I began to wonder if the surface normals had fancy orientations, like I remember seeing in one of Freya Holmer’s demos, but the reality is much simpler. The Rigidbody2D had collision detection set to Discrete, which seems to allow the shot to clip into the block a little bit. Setting it to Continuous and turning on Interpolate for good measure (mostly) fixed it, but why? The raycast could be hitting the nearby surface from inside, but neither an up nor a down surface normal would Reflect a vector by flipping its x direction. However, a left or right normal would do that, so perhaps the surface normal of a raycast collision from inside the collider is undefined, or perhaps it has a default value.
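That hunch can be checked against the reflection formula itself: Vector2.Reflect computes r = v - 2(v·n)n for a unit normal n. This Python version is my own transcription of the formula, not Unity’s code:

```python
def reflect(v, n):
    # r = v - 2 * dot(v, n) * n, for a unit normal n.
    dot = v[0] * n[0] + v[1] * n[1]
    return (v[0] - 2 * dot * n[0], v[1] - 2 * dot * n[1])

velocity = (3.0, -4.0)
print(reflect(velocity, (0.0, 1.0)))   # up normal flips y:   (3.0, 4.0)
print(reflect(velocity, (-1.0, 0.0)))  # left normal flips x: (-3.0, -4.0)
```

An up or down normal leaves x alone, so only a sideways (or default) normal could explain a flipped x direction.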
Therefore, I have two options: (A) use a raycast that returns all collisions (not just the first) and discard any whose distance is too small, or (B) push the starting point of the ray “forward” a bit. (A) is cleaner; (B) is cheaper. Both options may have flaws. In a tight corner, where the shot is going to bounce off of one surface immediately after bouncing off the other, (A) may reject the second surface as too close, and (B) may start the raycast inside the collider, repeating the original problem. But (A) could still work if we know that an inside raycast’s hit distance is exactly 0. Print debugging provided evidence that it is, so I’ll use RaycastAll.
void PredictBounceVelocity() {
    // Identify the first surface this shot will collide with.
    RaycastHit2D[] hits = Physics2D.RaycastAll(gameObject.transform.position, rigidBody.velocity, Mathf.Infinity, bounceCastLayers);
    Debug.Assert(hits.Length > 0, "The shot should always see at least a wall.");
    Vector2 surfaceNormal = hits[0].normal;
    // Discard a zero-distance hit: the ray started inside the collider we just bounced off.
    if (hits[0].distance == 0.0f) {
        Debug.Assert(hits.Length > 1, "A bounced shot looking out from a surface should still see at least a wall.");
        surfaceNormal = hits[1].normal;
    }
    // By reflecting across the surface normal, the angle of reflection should mirror the angle of incidence.
    bounceVelocity = Vector2.Reflect(rigidBody.velocity, surfaceNormal);
}
It might have helped, but in some cases, when the shot hits the block near its corner, it may touch more than one edge, with competing surface normals. Could it see two collisions on the same block? I don’t really know, but exploration leads me to the Edge Radius field in Box Collider 2D.

Set a value that forms a radius around the edge of the Collider. This results in a larger Collider 2D with rounded convex corners.

That sounds great, but when I give it a value, even the maximum value of 1000000, I don’t see the rounding in the scene view, regardless of whether Edit Collider is on. Well, I’ll start with 1, and if that number is too big, I’ll find out quickly. Ah: the first value I tried was 20, and the visualization was simply too spacious to appear in frame. A more appropriate choice for my scale is 0.05, closer to the scale of the shots themselves. Unfortunately, the problem persists, even with the radius set to 0.1.
On a lark, I try lowering the frame rate, which is about a thousand on my desktop. No luck. Perhaps I can tweak the parameters of the physics simulation itself to annihilate clipping altogether. They can be edited in Project Settings, and several fields seem concerned with “overshoot.” I see several candidates for help:
- Position Iterations: more iterations might allow more precise motion, though the description doesn’t mention overshoot
- Velocity Threshold: relevant if elastic collisions deliberately permit some level of intersection
- Max Linear Correction: mentions overshoot directly
- Baumgarte Scale: mentions overshoot directly
- Baumgarte Time Of Impact Scale: mentions overshoot directly
- Default Contact Offset: gets the collision before there is a chance to overshoot, similar to my goal with the edge radius, though Unity urges extreme caution
I’ll start by setting Default Contact Offset to 0.05. I noticed some jitter while testing, and while I didn’t see a corner issue, some shots passed through a wall. I’ll leave it at 0.02.
Next, I try changing Baumgarte Scale from 0.2 to 0.1, and Baumgarte Time Of Impact Scale from 0.75 to 0.25. It’s possible this caused bad bounces to happen more often.
After fiddling around with various settings, I’ve decided I can’t get out of this mess without being more serious about my “bounce velocity” logic. Just raycasting won’t be enough. I can track the last collider hit and ignore it until I’ve hit another. This is fine for convex colliders, the only kind there are: if I want to bounce twice on the same tetromino, I will hit two different colliders. The tight corner problem remains, but I can detect tight-corner situations and handle them gingerly. Projecting 3 or 4 collisions ahead may be the best bet.
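Projecting several collisions ahead can be sketched as a toy model (my own Python, not a working raycast: next_surface stands in for a cast that skips the collider the shot just left):

```python
def predict_bounces(next_surface, start, count):
    # Walk the bounce chain: from each surface, ask which surface
    # the shot reaches next, ignoring the one it is leaving.
    path, current = [], start
    for _ in range(count):
        current = next_surface(current)
        path.append(current)
    return path

# In an empty box the shot just ping-pongs between opposite walls:
walls = {"left": "right", "right": "left"}
print(predict_bounces(walls.get, "left", 3))  # ['right', 'left', 'right']
```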
Additionally, I can increase the rate of FixedUpdate from 50 to 100, if possible, to help increase precision. I’d also like to interrogate whether capsule colliders are better behaved than rounded box colliders.
All that and more, next time.
Cover Art: Joi Ito, Square watermelons, Creative Commons attribution