
Front-end visual interaction: using vertex animation to transition between images


For the best experience, read this on my blog.




This is an example of vertex animation implemented with an atypical vertex shader. It constructs a plane that can be shattered, which lets me make a smooth transition between two images while the plane breaks apart.

Use vertices to form triangles

Before reading on, you need a basic understanding of how OpenGL draws graphics. When OpenGL renders a surface, it is essentially drawing triangle faces formed from vertices; these triangle faces combine to make up the whole surface. The vertex data is stored in a buffer: we generate the buffer once at the beginning, submit it to the GPU as an attribute in a single upload, and from then on control the rendering with a small number of uniform variables.

Taking ThreeJS as an example, drawing a triangle takes a few steps:

  1. Construct a Buffer-based geometry and generate the vertex data:

function disposeArray() {
  this.array = null;
}

const geometry = new THREE.BufferGeometry();
const positions = [x0, y0, z0, x1, y1, z1, x2, y2, z2];
geometry.addAttribute('position', new THREE.Float32BufferAttribute(positions, 3).onUpload(disposeArray));

Note that in the last line we group the positions data in threes (x, y, z), submit it to the GPU as the position attribute, and release the CPU-side array after the upload (via disposeArray) to avoid wasting memory. Once submitted, the data can be used in the shader. Since the vertex shader runs once per vertex, each vertex naturally sees its position as a vec3 variable:

attribute vec3 position;
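The grouping can be illustrated in plain JavaScript. This is only a sketch of what the itemSize-3 attribute layout means, not a Three.js API; the helper name vertexAt is ours:

```javascript
// With itemSize 3, vertex i reads elements 3i .. 3i + 2 of the flat array.
const positions = [0, 0, 0, 1, 0, 0, 0, 1, 0]; // three vertices of a triangle

function vertexAt(array, i, itemSize = 3) {
  // Vertex i occupies the slice [i * itemSize, (i + 1) * itemSize).
  return array.slice(i * itemSize, (i + 1) * itemSize);
}

console.log(vertexAt(positions, 1)); // [ 1, 0, 0 ] -- the second vertex
```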
  2. Write the shaders and construct a material:

const material = new THREE.RawShaderMaterial({
    uniforms,
    vertexShader: shaders.vertex,
    fragmentShader: shaders.fragment
});

material.needsUpdate = true;

Here we pass in the constructed uniforms (mainly textures and matrices, and sometimes animation parameters), together with the vertexShader and fragmentShader, to complete the construction of the material.

  3. Generate the mesh:

const mesh = new THREE.Mesh(geometry, material);

After adding the mesh to the scene and starting the render loop, you'll see a triangle.

Use triangles to construct a plane

A single triangle is certainly not enough. Next we have to construct a plane, which is actually very simple: just repeat the triangle construction following the same pattern. Think about it for a moment: how can a rectangle be built from triangles? Of course, from two symmetrical triangles:

const positions = [l, t, 0, r, t, 0, l, b, 0, l, b, 0, r, b, 0, r, t, 0];

Here I construct two triangles that fit together to form a rectangle, where l, r, t, and b are the left, right, top, and bottom boundaries of the rectangle. Since it lies in the xy plane, z is 0.
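A quick sanity check in plain JavaScript: the six vertices above should cover exactly the four corners of the rectangle, with two corners shared between the triangles (the values chosen for l, r, t, b here are arbitrary examples):

```javascript
// Count the distinct (x, y) corners among the six vertices.
const [l, r, t, b] = [-1, 1, 1, -1];
const positions = [l, t, 0, r, t, 0, l, b, 0, l, b, 0, r, b, 0, r, t, 0];

const corners = new Set();
for (let i = 0; i < positions.length; i += 3) {
  corners.add(`${positions[i]},${positions[i + 1]}`);
}
console.log(corners.size); // 4 -- two of the six vertices coincide with others
```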

However, this is not enough. We need debris, many fragments, but that is not difficult either: just divide the large rectangle into several small rectangles, then divide each small rectangle into two triangles:

const stepX = .1;
const stepY = .05;

for (let x = left; x < right; x += stepX) {
  for (let y = top; y < bottom; y += stepY) {
    const xL = x;
    const xR = x + stepX;
    const yT = y;
    const yB = y + stepY;
    // first triangle
    positions.push(xL, yT, 0);
    positions.push(xL, yB, 0);
    positions.push(xR, yB, 0);
    // second triangle
    positions.push(xL, yT, 0);
    positions.push(xR, yT, 0);
    positions.push(xR, yB, 0);
  }
}
Here stepX and stepY are the width and height of each small rectangle. With this code, we generate a large rectangle composed of many small triangle fragments.
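The nested loop can be wrapped in a helper (the name buildPositions is ours) so the vertex count can be checked:

```javascript
// Build the flat position array for a grid of cells, two triangles per cell.
function buildPositions(left, right, top, bottom, stepX, stepY) {
  const positions = [];
  for (let x = left; x < right; x += stepX) {
    for (let y = top; y < bottom; y += stepY) {
      const xL = x, xR = x + stepX;
      const yT = y, yB = y + stepY;
      positions.push(xL, yT, 0, xL, yB, 0, xR, yB, 0); // first triangle
      positions.push(xL, yT, 0, xR, yT, 0, xR, yB, 0); // second triangle
    }
  }
  return positions;
}

// A 1 x 1 plane with 0.5-sized cells: 2 x 2 cells, 2 triangles each,
// 3 vertices per triangle, 3 floats per vertex = 72 floats.
console.log(buildPositions(-0.5, 0.5, -0.5, 0.5, 0.5, 0.5).length); // 72
```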


Everything above deals with generating vertices, but there is still one very important step left: how do we color the triangles? Readers with some graphics background will know that in the fragment shader we usually sample a texture through uv coordinates to produce the output color. Have you ever wondered where this uv comes from? It is passed along from an attribute in the vertex shader, and that attribute is submitted from the CPU (stored with the model's vertex data) in exactly the same way as position:

for (let x = left; x < right; x += stepX) {
  for (let y = top; y < bottom; y += stepY) {
    // positions ......
    uvs.push((xL + right) / width, (yT + bottom) / height);
    uvs.push((xL + right) / width, (yB + bottom) / height);
    uvs.push((xR + right) / width, (yB + bottom) / height);
    uvs.push((xL + right) / width, (yT + bottom) / height);
    uvs.push((xR + right) / width, (yT + bottom) / height);
    uvs.push((xR + right) / width, (yB + bottom) / height);
  }
}

geometry.addAttribute('uv', new THREE.Float32BufferAttribute(uvs, 2).onUpload(disposeArray));

Here we calculate the uv coordinates of each vertex and submit them to the GPU as the uv attribute, which can then be used in the shader:

attribute vec2 uv;
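The normalization can be checked in plain JavaScript. It assumes, as the arithmetic above implies, a plane centred on the origin (left = -right, top = -bottom); the helper name toUv is ours:

```javascript
// Map a vertex coordinate into the [0, 1] uv range for a centred plane.
function toUv(x, y, right, bottom, width, height) {
  return [(x + right) / width, (y + bottom) / height];
}

// For a 2 x 2 plane (left = -1, right = 1, top = -1, bottom = 1):
console.log(toUv(-1, -1, 1, 1, 2, 2)); // [ 0, 0 ] -- the (left, top) corner
console.log(toUv(1, 1, 1, 1, 2, 2));   // [ 1, 1 ] -- the (right, bottom) corner
```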

Break it! Adding an attribute to tell vertices apart

At this point, we should be able to render a normal picture. I know what you want to say: all this work just to render a picture? Don't worry, we only need one small trick to achieve a simple fragmentation effect:

vec3 new_position = position;
new_position.z += position.x;

gl_Position = projectionMatrix * modelViewMatrix * vec4(new_position, 1.0);

With these few lines, we shift the z coordinate of each vertex by the value of its x coordinate. In theory, I wrote this to achieve a "left to right, triangles peeling away layer by layer" effect. However, it backfires: if you run this code, you will find that the whole picture stays continuous, merely skewed in the xz plane. Think about why this happens. It is actually very simple: for two adjacent triangles, two of their three vertices coincide, and coincident vertices are naturally transformed identically if nothing distinguishes them. All the coincident vertices can effectively be treated as a single vertex, so the transformed image remains continuous.

To solve this problem, we have to give the vertices of different triangles a new attribute that differs even where their positions coincide. For example, 3D models carry a normal attribute indicating the normal direction at each vertex. In this example, we can construct a property called centre which holds the center point of each triangle, and give all three vertices of a triangle the same centre value:

for (let x = left; x < right; x += stepX) {
  for (let y = top; y < bottom; y += stepY) {
    // positions, uvs ......
    for (let i = 0; i < 3; i += 1) {
      centres.push(xL + (xR - xL) / 4, (yT + yB) / 2, 0);
    }
    for (let i = 0; i < 3; i += 1) {
      centres.push(xR - (xR - xL) / 4, (yT + yB) / 2, 0);
    }
  }
}

geometry.addAttribute('centre', new THREE.Float32BufferAttribute(centres, 3).onUpload(disposeArray));

The transformation of the vertices is then based on this center point, so vertices that coincide but belong to different triangles can be distinguished:

new_position.z += centre.x;
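The centre computation from the loop above can be sketched in plain JavaScript (the helper name cellCentres is ours). Each cell's two triangles get different centre points, quarter points on the cell's horizontal midline, which is exactly what lets coincident vertices from different triangles be told apart:

```javascript
// Compute the centre attribute for the two triangles of one grid cell.
function cellCentres(xL, xR, yT, yB) {
  const midY = (yT + yB) / 2;
  return [
    [xL + (xR - xL) / 4, midY, 0], // first (left) triangle
    [xR - (xR - xL) / 4, midY, 0], // second (right) triangle
  ];
}

const [c1, c2] = cellCentres(0, 1, 0, 1);
console.log(c1, c2); // [ 0.25, 0.5, 0 ] [ 0.75, 0.5, 0 ] -- distinct per triangle
```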

Make it move

Now we can statically shatter a picture once we have given it a texture, but how do we make this shattering process animate?

The solution is to introduce an external uniform variable, progress: an auto-incrementing variable ranging from 0 to 1 that indicates the progress of the motion. Combining this variable with a few others, plus your favorite vertex transformation logic (formulas), you can achieve many amazing effects. In this example, I use the distance between each triangle's centre and the center of the image as the baseline, combine progress with trigonometric functions to drive the x and y coordinates, offset each vertex's z coordinate by a varying amount, and finally add rotation to make the whole effect more dynamic:

attribute vec3 position;
attribute vec3 centre;
attribute vec2 uv;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform float progress;
uniform float top;
uniform float left;
uniform float width;
uniform float height;
varying vec2 vUv;

vec3 rotate_around_z_in_degrees(vec3 vertex, float degree) {
    float alpha = degree * 3.14 / 180.0;
    float sina = sin(alpha);
    float cosa = cos(alpha);
    mat2 m = mat2(cosa, -sina, sina, cosa);

    return vec3(m * vertex.xy, vertex.z);
}

void main() {
    vUv = uv;
    vec3 new_position = position;
    vec3 center = vec3(left + width * 0.5, top + height * 0.5, 0);
    vec3 dist = center - centre;
    float len = length(dist);
    float factor;

    if (progress < 0.5) {
        factor = progress;
    } else {
        factor = (1. - progress);
    }

    float factor1 = len * factor * 10.;
    new_position.x -= sin(dist.x * factor1);
    new_position.y -= sin(dist.y * factor1);
    new_position.z += factor1;
    new_position = rotate_around_z_in_degrees(new_position, progress * 360.);

    gl_Position = projectionMatrix * modelViewMatrix * vec4(new_position, 1.0);
}
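The key piece of the vertex shader is the progress-to-factor curve. Ported to JavaScript for illustration (the function name factor is ours), it rises from 0 to 0.5 and falls back, so the mesh breaks apart in the first half of the animation and reassembles in the second:

```javascript
// Triangle-wave displacement factor driven by the 0..1 progress uniform.
function factor(progress) {
  return progress < 0.5 ? progress : 1 - progress;
}

console.log(factor(0));   // 0   -> flat plane
console.log(factor(0.5)); // 0.5 -> maximum displacement
console.log(factor(1));   // 0   -> flat plane again
```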

A natural transition between two pictures

The vertex animation ends here. The last thing to consider for this effect is how to make the two images transition naturally. This is in fact already reflected in the vertex shader above: I used 0.5 as the dividing point to split the whole motion cycle into two halves. In the first half the plane gradually shatters; in the second half the fragments gradually settle back into a plane.

With the vertex shader done, you can easily complete the natural transition between the two textures in the fragment shader:

uniform sampler2D image1;
uniform sampler2D image2;
uniform float progress;
varying vec2 vUv;

void main() {
  vec4 t_image;
  vec4 t_image1 = texture2D(image1, vUv);
  vec4 t_image2 = texture2D(image2, vUv);

  t_image = progress * t_image1 + (1. - progress) * t_image2;
  gl_FragColor = t_image;
}
Using progress, the two textures are blended with complementary weights: at progress = 0 you see image2, and at progress = 1 you see image1.
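The per-channel blend is easy to verify in JavaScript (the function name blend is ours; in GLSL the built-in mix(t_image2, t_image1, progress) would compute the same thing):

```javascript
// Linear blend of two colour channels by the 0..1 progress value.
function blend(c1, c2, progress) {
  return progress * c1 + (1 - progress) * c2;
}

console.log(blend(1, 0, 0));   // 0   -> only image2's channel
console.log(blend(1, 0, 1));   // 1   -> only image1's channel
console.log(blend(1, 0, 0.5)); // 0.5 -> an even mix
```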