 

Can Angular minify, compress and remove unused JS and CSS files on build?

I have an Angular 16 application which uses the progressive web app (PWA) feature and server-side rendering. We also updated the majority of our components to standalone and lazy loaded our modules.

Since Google changed its indexing rules recently, we've been struggling to improve our FCP/LCP.

Is there any way to minify, compress and remove unused JS and CSS files on build to ensure PWA compatibility and achieve better FCP/LCP scores?

Asked Oct 15 '25 by Sylions

1 Answer

Your question is too complex to be answered easily; there are many variables that impact the load performance of an application. But I'll try to give you a quick overview.

1. Can Angular remove unused JS and CSS? (tl;dr: Yes)

Technically it's not Angular itself but the bundler that removes unused code. If you build with the Angular CLI, it does this by default for production builds. That functionality is called tree shaking, and both webpack and esbuild support it.

Tree shaking is a term commonly used in the JavaScript context for dead-code elimination.

However, how well tree shaking works depends heavily on how your code is structured and which libraries you're using.
To name some common mistakes:

  • Adding too many styles to the global styles.css. => Try to keep your styles local in the components that really need them.
  • Using SCSS imports for anything but mixins in component styles. Since styles for each component are compiled separately, the same styles might end up multiple times in your build artifact. => Try to use components to share styles.
  • Usage of non-tree-shakable libraries. => Try to keep the number of 3rd-party libraries small and ensure the ones you use don't add too much weight (see the sketch below).

The best way is to check your bundle with a bundle analyzer and see what could be removed to optimize the bundle size.
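
To illustrate the library point with a common example (lodash is used here purely for illustration, it's not something from your project): the import style alone can decide whether the bundler is able to drop unused code.

    // Tree-shakable: a named import from an ES-module build lets the bundler
    // keep only debounce and drop every other lodash function.
    import { debounce } from 'lodash-es';

    // Not tree-shakable: a default import of the CommonJS build pulls the whole
    // library into the bundle, even if only one function is used.
    // import _ from 'lodash';

    export const onResize = debounce(() => {
      console.log('window resized');
    }, 250);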

Just as a reference: the minified "Estimated transfer size" for the "Initial chunk files" can be around 160-180 kB even for extremely complex applications. You can check the output of the Angular CLI build for your current values. However, there will most likely be additional chunks required to show the LCP of your page.

2. Can Angular minify JS and CSS code? (tl;dr: Yes)

Again, it's the bundler that is responsible for minification. For the Angular CLI you can configure it via the optimization option in the Angular workspace configuration (angular.json).

The optimization option can be either a Boolean or an Object for more fine-grained configuration. This option enables various optimizations of the build output, including:

  • Minification of scripts and styles
  • Tree-shaking
  • Dead-code elimination
  • Inlining of critical CSS
  • Fonts inlining
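
For reference, this is roughly what the object form could look like inside the build target of angular.json. It is only a fragment based on the documented options, so adjust it to your own workspace:

    "configurations": {
      "production": {
        "optimization": {
          "scripts": true,
          "styles": {
            "minify": true,
            "inlineCritical": true
          },
          "fonts": true
        }
      }
    }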

3. Can Angular compress JS and CSS code? (tl;dr: No)

Compression is the responsibility of neither Angular nor the bundler. However, your CDN can usually compress your files dynamically before delivering them. To get a higher compression ratio, it's a good idea to create statically precompressed Brotli files. Since most, but not all, clients support Brotli, be sure to deliver the precompressed files only to clients that support it.
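
If your SSR server is the default Express setup, a minimal sketch of serving precompressed files could look like the following. The output path, the .br file extension and the assumption that you precompress the bundles during your build are placeholders you'd need to adapt:

    // Serve a statically precompressed .br file when the client advertises
    // Brotli support, otherwise fall through to the regular static file.
    import express from 'express';
    import { existsSync } from 'node:fs';
    import { join } from 'node:path';

    const app = express();
    const distFolder = join(process.cwd(), 'dist/browser'); // adjust to your build output

    app.get(/\.(js|css)$/, (req, res, next) => {
      const acceptEncoding = String(req.headers['accept-encoding'] ?? '');
      const brotliFile = join(distFolder, `${req.path}.br`);

      if (acceptEncoding.includes('br') && existsSync(brotliFile)) {
        res.set('Content-Encoding', 'br');
        res.set('Vary', 'Accept-Encoding');
        // Keep the original MIME type instead of the one derived from ".br".
        res.type(req.path.endsWith('.css') ? 'text/css' : 'application/javascript');
        return res.sendFile(brotliFile);
      }
      next(); // no Brotli support or no precompressed file available
    });

    app.use(express.static(distFolder));

Alternatively, most CDNs and some ready-made Express middleware can handle this content negotiation for you, so you don't have to maintain the logic yourself.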

Additional Information

  1. With server-side rendering (SSR) and proper caching, your FCP and LCP should not be a problem anyway, because the LCP can usually be shown even before your application has loaded the JS code. However, Interaction to Next Paint (INP) might be more problematic with SSR, because users might try to interact with the page while it's still hydrating.
  2. To improve FCP without SSR you could use a loading spinner. For good UX it's best to show it only if loading really takes longer than ~1 s.
  3. If you update to Angular 17 you could use Angular's Deferrable Views to easily lazy-load components that are not required for the LCP (see the sketch after the quoted notes below).
Deferrable views can be used in component template to defer the loading of select dependencies within that template.

However, be aware that Deferrable Views are not rendered in SSR, so they can be problematic for SEO-relevant content.

When rendering an application on the server (either using SSR or SSG), defer blocks always render their @placeholder (or nothing if a placeholder is not specified). Triggers are ignored on the server.
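
As a sketch of how point 3 could look (Angular 17+ syntax; the component and selector names here are made up for illustration and not taken from your project):

    // Defer a below-the-fold widget so its chunk is only fetched when it scrolls
    // into view, keeping it out of the initial bundle that affects FCP/LCP.
    import { Component } from '@angular/core';
    import { ReviewsComponent } from './reviews.component'; // hypothetical heavy component

    @Component({
      selector: 'app-product-page',
      standalone: true,
      imports: [ReviewsComponent],
      template: `
        <h1>Product</h1>            <!-- rendered immediately, part of the LCP -->

        @defer (on viewport) {
          <app-reviews />           <!-- loaded lazily on the client -->
        } @placeholder {
          <p>Loading reviews...</p> <!-- single root element; this is what SSR renders -->
        }
      `,
    })
    export class ProductPageComponent {}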

Update (3 April 2024): Since performance optimization is a very wide field, here are some specific actions that can be taken, based on the example "app.gudule.co", which was the reason the question was raised. These suggestions are based on the state of the application on 3 April 2024. I think it's a good addition to the generic answer above, because others might face similar issues and examples are easier to understand. However, these are just a few simple first steps; there are many more possibilities, but I hope it's enough to get proper performance.

  1. Use a CDN
    Currently the whole page, including the static assets, is delivered through the Node service. This is not only slow, but also expensive, and might lead to problems on redeploys. If the service is redeployed, static assets like JS chunks might not be available anymore. But not all users reload the page immediately after a deployment => they might get a broken application on navigation because of missing assets. It was mentioned in an older version of the question that the application is running on Azure App Service. An obvious choice would be using Azure Front Door as a CDN, but other CDNs like Cloudflare would work too. Additionally, the static assets could be delivered from storage (Azure Blob Storage, AWS S3 or similar) instead of from the service.
  2. Cache the prerendered HTML pages
    The delivery of the initial HTML response is quite slow (for me >300 ms). It seems as if it's being rendered on every request. This should be cached, which would improve performance a lot and also reduce cost. There are many ways to cache it. One would be putting it behind a CDN that supports stale-while-revalidate (Azure Front Door does not). However, since the app doesn't seem to have too many pages, storing it in memory (just a plain Node object) might be fine (see the sketch after this list). If that takes too much memory, Redis might be an alternative option. This alone should improve FCP & LCP by around 200-300 ms.
  3. Proper brotli compression
    The effort might not be worth it, but static precompression would, for example, reduce the main.js bundle size from 532 KB (currently dynamically gzipped, so there is already compression in place) to 424 KB, which is ~20% less JS and CSS that needs to be transferred.
  4. Reduce global styles size
    The global styles are way too big: 161 kB that block the page from being displayed at all. I personally prefer not to use a global styles file at all. However, if you use one, keep it super small and add only the things you really need.
  5. Reduce main.js size
    As mentioned in 3., the main.js bundle is quite big at 400-500 kB. Try to bring it down as much as possible with lazy loading and by removing things you don't need for your LCP. As mentioned in the original answer, a bundle analyzer is a great tool for that.
  6. Add defer on Angular script tags
    On a server-side rendered page, main.js, polyfills.js and runtime.js are not required to show the FCP and LCP. But currently the page waits for them before anything is shown, because they don't have the defer attribute. One option would be to add it in server.ts after rendering the page and before delivering it (see the sketch after this list).
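
To make suggestions 2 and 6 a bit more concrete, here is a rough TypeScript sketch against the default Express-based server.ts. The server and indexHtml variables and the bundle file names are assumptions based on the standard Angular Universal template, and the regex and the cache are intentionally naive, so treat it as an illustration rather than production code:

    // Suggestion 2: plain in-memory cache of the rendered HTML per URL.
    // Suggestion 6: add the defer attribute to the Angular script tags before responding.
    const htmlCache = new Map<string, string>();

    server.get('*', (req, res, next) => {
      const cached = htmlCache.get(req.originalUrl);
      if (cached) {
        return res.send(cached); // served from memory, no re-render
      }

      // The callback form of res.render() gives access to the HTML before it is sent.
      res.render(indexHtml, { req }, (err, html) => {
        if (err) {
          return next(err);
        }

        // Only rewrites plain Angular bundle tags (no other attributes),
        // leaving third-party scripts untouched.
        const deferred = html.replace(
          /<script src="((?:runtime|polyfills|main)[^"]*\.js)">/g,
          '<script defer src="$1">'
        );

        htmlCache.set(req.originalUrl, deferred); // add a TTL/invalidation strategy in real code
        res.send(deferred);
      });
    });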

Let's estimate what effect suggestions 2, 4 and 6 would have on the LCP. Before the changes, the request waterfall for a desktop test looks like this: [screenshot: requests before]

In that specific test it takes 2.1 s for FCP and LCP. Let's assume suggestion 2 is done and your server responds in 70 ms thanks to the cache. That would reduce the overall LCP by 135 ms (the time currently spent rendering the page).
If global CSS (suggestion 4) is reduced by 50%, it would save another 500 ms.
Finally, suggestion 6, setting the Angular JS files to defer, removes the requirement for them to be loaded before the FCP and LCP are shown. Since the images are either too small or, for most users, below the fold, they should have no effect on the LCP. Overall, with those improvements alone, I would expect the LCP for such a synthetic test (desktop with a slow DSL connection) to drop to around 1.2 seconds.

Answered Oct 17 '25 by Christoph Stickel