Frame is the king of layout. Everybody uses frames to position and resize their `UIView`s and `CALayer`s. Throughout this post I’m going to focus my attention on `CALayer`, as this is the underlying workhorse of `UIView`, and `view.frame` simply returns `view.layer.frame`. Moreover, I will not discuss the `setFrame:` setter. While the scope might seem very limited, it actually turns out there is a lot of fun stuff going on inside a plain, old `frame` getter.

## What Frame Depends On

It’s generally understood that `frame` is just a derived property, computed from other properties. There are actually four (!) properties that get taken into account when calculating frame: `bounds`, `anchorPoint`, `transform`, and `position`.

Let’s start with `bounds`. Bounds are tricky: they mix both the exterior and the interior of the layer. `bounds.size` defines the dimensions of the layer itself, declaring the area in which the layer exists. Setting `masksToBounds` to `YES` visualises this area by clipping all the sublayers that managed to escape the bounds. On the other hand, the `origin` of `bounds` doesn’t affect the layout of the layer itself; however, it changes how sublayers are positioned within the layer. `bounds.origin` defines, well, the origin of the layer’s internal coordinate system.

Here’s a quick example of how `bounds.origin` works: let’s define `bounds.origin` as `CGPointMake(20.0f, 30.0f)`. How do you define the local coordinate system? Just slap the `bounds.origin` point on the top-left corner of the layer’s rect:

`anchorPoint` is a slightly different beast. First of all, its values are normalised to the 0.0 – 1.0 range. Getting values in units of points requires multiplying the normalised values by the layer’s `bounds.size`. More importantly, however, `anchorPoint` defines the origin of the coordinate system in which transforms get applied:

Transforming original layers (blue) with the same `bounds` but different `anchorPoint`s changes the layout a lot (gray).

`position` is actually the easiest one. It defines the final translation that is added to the layer after all the mangling with `bounds.size`, `anchorPoint`, and `transform`.

## A Quick Discourse on Precision

While I was working on this post, I noticed my calculations were sometimes slightly off in comparison to what CoreAnimation returned. Either I was doing something wrong, or I had precision issues. Quite naturally, I opted for checking precision issues first. Fortunately, my gut feeling was right. While `CGFloat` on a 32-bit architecture is just a typedefed `float` (and a `double` on 64-bit ones), it seems that CoreAnimation uses `double`s internally, regardless of `CGFloat`’s actual type.

Testing that wasn’t really hard. I jumped into Hopper, checked what the `frame` getter of `CALayer` calls, and discovered a `mat4_apply_to_rect` function. Then, I set a symbolic breakpoint on it, which actually added two breakpoints, on both `CA::Mat4Impl::mat4_apply_to_rect(double const*, double*)` and `CA::Mat4Impl::mat4_apply_to_rect(float const*, float*)`, which would suggest that two execution paths are possible (?). However, while running on the device, it stops on the `double` version, despite using a 32-bit ARM iPhone.

It’s clear that a visual difference between code using `float` and `double` would be noticeable only in some pathological cases. However, since our goal is to reverse-engineer CoreAnimation and receive exactly the same results, we should use `double`s as well. Let’s define some very primitive structures that are equivalent to their CoreGraphics counterparts:

```
typedef struct MCSDoublePoint {
    double x, y;
} MCSDoublePoint;

typedef struct MCSDoubleSize {
    double width, height;
} MCSDoubleSize;

typedef struct MCSDoubleRect {
    MCSDoublePoint origin;
    MCSDoubleSize size;
} MCSDoubleRect;
```

It’s worth noting that deploying for iOS on a 64-bit architecture would render our carefully crafted `struct`s redundant, since on that architecture, `CGPoint`, `CGSize`, and `CGRect` use `double`s anyway.

## Transforms

Before we dissect the `frame` getter itself, let’s get done with transforms. Although `CALayer` makes use of a full-fledged 4×4 matrix disguised as `CATransform3D`, it doesn’t really matter for the purpose of calculating `frame`. For this reason, I’m going to focus on `CGAffineTransform` instead, which can be easily obtained from `CATransform3D` by using everyone’s favorite `CATransform3DGetAffineTransform`.

Let’s get started with points. Transforming points using an affine transform is algebra 101:

```
MCSDoublePoint MCSDoublePointApplyTransform(MCSDoublePoint point, CGAffineTransform t)
{
    MCSDoublePoint p;
    p.x = (double)t.a * point.x + (double)t.c * point.y + t.tx;
    p.y = (double)t.b * point.x + (double)t.d * point.y + t.ty;
    return p;
}
```

This implementation is based on `CGPointApplyAffineTransform`. It basically multiplies a 3×3 transform matrix by a 3-dimensional vector. The matrix is filled with the values of `CGAffineTransform`, and the multiplied vector consists of the point’s x coordinate, y coordinate, and `1.0`, causing the resulting vector to get the translation components from the matrix as well.
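Spelled out in matrix form (writing the point as a column vector, to match the code above):

```latex
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix}
=
\begin{bmatrix}
a & c & t_x \\
b & d & t_y \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\quad\Longrightarrow\quad
\begin{aligned}
x' &= a\,x + c\,y + t_x \\
y' &= b\,x + d\,y + t_y
\end{aligned}
```

The constant `1` in the last component is what lets a plain matrix multiplication carry the translation `(t_x, t_y)` along.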

Using point transformations, we can easily transform rectangles: applying a transform to the rect’s corners and connecting them with lines creates a parallelogram (in the general case, it can be any quad). However, this is *not* how `CGRectApplyAffineTransform` works. This function takes a `CGRect` and returns a `CGRect`. As the comment inside the `CGAffineTransform.h` header claims:

> Since affine transforms do not preserve rectangles in general, this function returns the smallest rectangle that contains the transformed corner points of `rect`.

Having read that, recreating `CGRectApplyAffineTransform` using doubles is relatively straightforward:

```
MCSDoubleRect MCSDoubleRectApplyTransform(MCSDoubleRect rect, CGAffineTransform transform)
{
    double xMin = rect.origin.x;
    double xMax = rect.origin.x + rect.size.width;
    double yMin = rect.origin.y;
    double yMax = rect.origin.y + rect.size.height;

    MCSDoublePoint points[4] = {
        [0] = MCSDoublePointApplyTransform((MCSDoublePoint){xMin, yMin}, transform),
        [1] = MCSDoublePointApplyTransform((MCSDoublePoint){xMin, yMax}, transform),
        [2] = MCSDoublePointApplyTransform((MCSDoublePoint){xMax, yMin}, transform),
        [3] = MCSDoublePointApplyTransform((MCSDoublePoint){xMax, yMax}, transform),
    };

    double newXMin = INFINITY;
    double newXMax = -INFINITY;
    double newYMin = INFINITY;
    double newYMax = -INFINITY;

    for (int i = 0; i < 4; i++) {
        newXMax = MAX(newXMax, points[i].x);
        newYMax = MAX(newYMax, points[i].y);
        newXMin = MIN(newXMin, points[i].x);
        newYMin = MIN(newYMin, points[i].y);
    }

    MCSDoubleRect result = {newXMin, newYMin, newXMax - newXMin, newYMax - newYMin};
    return result;
}
```

We calculate the coordinates of all four corner points, transform them, and take the extreme values for both the `x` and `y` axes.

## Calculating Frame

Now that we've gone through all this effort to understand everything that matters, deriving frame is going to be a blast:

- Define a rect with a size of `bounds.size`.
- Calculate the position of the `anchorPoint` within this rect.
- Place the rect inside a coordinate system, putting its `anchorPoint` at the system's origin.
- Apply whatever crazy `transform` you have, and keep "the smallest rectangle that contains the transformed corner points".
- Offset the `anchorPoint` by `position`.
- Your frame is in gray.

Here's the code that does all this:

```
- (CGRect)frameWithBounds:(CGRect)bounds anchorPoint:(CGPoint)anchorPoint transform:(CATransform3D)transform position:(CGPoint)position
{
    MCSDoubleRect rect;
    rect.size.width = bounds.size.width;
    rect.size.height = bounds.size.height;
    rect.origin.x = (double)-bounds.size.width * anchorPoint.x;
    rect.origin.y = (double)-bounds.size.height * anchorPoint.y;

    rect = MCSDoubleRectApplyTransform(rect, CATransform3DGetAffineTransform(transform));

    rect.origin.x += position.x;
    rect.origin.y += position.y;

    return CGRectMake(rect.origin.x, rect.origin.y, rect.size.width, rect.size.height);
}
```

While it's not many lines of code, it makes use of all the concepts we've discussed.

## How This Maps to `UIView`

In terms of the getters for `frame`, `bounds`, and `center`, `UIView` doesn't really do any work at all; it simply calls its backing `CALayer`’s `frame`, `bounds`, and `position`, respectively.

Note that `center` maps to `position`: this means that changing the underlying `layer`’s `anchorPoint` makes the `center` technically incorrect, as it no longer corresponds to the “point in the middle” of the layer’s bounding rect.
