Cache HTML Fragment or how to optimize the tree menu?

Is it possible to cache rendered HTML fragments?

I found that my treemenu snippet (from the cookbook) adds noticeably to the rendering time (even when it doesn't go all levels down the tree).

So how could I deal with that? I thought about caching the rendered HTML, but that's probably not a good idea while I'm working with class="active" (although caching fragments would be useful anyway).
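Roughly what I had in mind, as a sketch using Kirby's cache API (the `treemenu` cache name is made up and would need to be registered first, e.g. by a plugin via its `cache` option):

```php
// Sketch only: assumes a cache named 'treemenu' has been registered.
$cache = kirby()->cache('treemenu');

$html = $cache->get('menu');

if ($html === null) {
    // Render the snippet to a string instead of echoing it directly
    $html = snippet('treemenu', [], true);
    $cache->set('menu', $html, 60); // keep for 60 minutes
}

echo $html;
```

But as said, this would also cache the `active` class of whatever page happened to render first.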

I appreciate any answer.

Caching the rendered HTML of the treemenu definitely won't work, that's right. But I wonder which fragments you want to store? There is not much that doesn't potentially carry an active class.

Can’t you use the page cache? And why is the treemenu so slow? How many levels and pages are we talking here?

You could probably leave the active class out of the snippet and inject it via JS instead?

It takes a really large number of pages for this to matter, but have you considered caching metadata instead? In a JSON file, for example? I mean, if it's the indexing of pages that is slow, cache that. You could even refresh that cache in page event hooks, and then generate the HTML from the cached data.
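A minimal sketch of that metadata idea (plain PHP inside a Kirby template; the file path and array shape are my own invention, and error handling is omitted):

```php
// Sketch: cache the menu *structure* (not the HTML) as JSON,
// then rebuild the markup from the cached array on each request.
$file = kirby()->root('cache') . '/treemenu.json';

if (is_file($file)) {
    $tree = json_decode(file_get_contents($file), true);
} else {
    // Walk the pages once and store only what the menu needs
    $walk = function ($pages) use (&$walk) {
        $items = [];
        foreach ($pages->listed() as $p) {
            $items[] = [
                'url'      => $p->url(),
                'title'    => $p->title()->value(),
                'depth'    => $p->depth(),
                'children' => $walk($p->children()),
            ];
        }
        return $items;
    };
    $tree = $walk($site->children());
    file_put_contents($file, json_encode($tree));
}

// $tree can now be rendered without touching pages or the database;
// the "active" class is still decided at render time from the current URL.
```

The nice part: because the active class is applied while rendering `$tree`, the cached data stays valid for every page.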

Thanks for asking the right questions that got me to rethink a few things and dig deeper.

My page tree consists of a handful (15?) kirby pages and 150+ virtual pages from a database, nested 3 to 4 levels deep.

I'm using @ahmetbora's https://github.com/afbora/kirby-blade, which works great, and I can live with the overhead of the Blade engine compared to native PHP templating. But it looks like the recursive calls to the treemenu snippet (or partial, in Blade) are adding up.

Here is my test setup. I measured the time with Clockwork (I described that setup here in the forum before):

@php
	clock()->startEvent('treemenu', "Treemenu");
	snippet('treemenu');
	clock()->endEvent('treemenu');
@endphp

@php
    clock()->startEvent('megamenu', "Megamenu");
@endphp

@include('partials.treemenu-mega')

@php
    clock()->endEvent('megamenu');
@endphp

treemenu-mega.blade.php (this is a stripped down and customized version just to illustrate the case)

@php
if(!isset($subpages)) $subpages = $site->children();
@endphp

@foreach($subpages->listed() as $p)
	<li>
		@if($p->hasListedChildren() && $p->depth() < 3)
			<ul>
				@include('partials.treemenu-mega', ['subpages' => $p->children()])
			</ul>
		@endif
	</li>
@endforeach

and snippet/treemenu.php (the original from the cookbook)

<?php if(!isset($subpages)) $subpages = $site->children() ?>
<ul>
	<?php foreach($subpages->listed() as $p): ?>
		<li class="depth-<?= $p->depth() ?>">
			<a<?php e($p->isActive(), ' class="active"') ?> href="<?= $p->url() ?>"><?= $p->title()->html() ?></a>
			<?php if($p->hasChildren()): ?>
				<?php snippet('treemenu', ['subpages' => $p->children()]) ?>
			<?php endif ?>
		</li>
	<?php endforeach ?>
</ul>

The native treemenu is much faster than the megamenu.
The entries below are calls to ProductsPage->children(), where I build the page array from the virtual pages in the database. Performance-wise that part is pretty insignificant.

I believe the page with the Blade menu would still be fast enough once it's live (optimized hosting on Linux instead of some weird localhost Windows XAMPP setup, Clockwork debugging overhead turned off; Xdebug was not on for my tests, because that degrades performance even more). But I think it's better to act now and optimize that bit than to do it later. It only gets worse as the site grows.

When I was asking for caching html fragments, I was thinking about this article: https://signalvnoise.com/posts/3112-how-basecamp-next-got-to-be-so-damn-fast-without-using-much-client-side-ui (=> #2 Caching TO THE MAX)

Of course, caching the whole page has an immediate effect and brings rendering down to 190 ms in this case. My only fear with this approach is that, on a large site, I might forget to exclude some pages from caching.
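One way to keep that fear manageable: Kirby's page cache accepts an `ignore` option in the config, either as a list of ids or as a closure, so the exclusion logic lives in a single place. A sketch (the template names are examples):

```php
// site/config/config.php
return [
    'cache' => [
        'pages' => [
            'active' => true,
            // return true for pages that must stay dynamic
            'ignore' => function ($page) {
                return in_array($page->template()->name(), ['contact', 'search']);
            }
        ]
    ]
];
```

That way new dynamic templates only need to be added to one array instead of being remembered per page.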

If someone has more to add to this topic, that would be appreciated.

How are you querying for your children?

I mean, it seems tempting to do something like SELECT * FROM pages WHERE parentId = :id. But that implies running a query for each page.

That would mean there's quite a big difference between having, say, 3 pages with 50 children each and 50 pages with 3 children each: a difference between running 3 queries or 50.
Maybe for the menu, if there are "only" 150 pages to load, it would be faster to load them all in one query, perhaps sorted by parent, and then build the tree structure in PHP from that.
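A sketch of that approach (using Kirby's `Db` class; the table and column names are made up, adjust to your schema): fetch all rows once, group them by parent id, then resolve the tree recursively from the in-memory map.

```php
use Kirby\Database\Db;

// One query instead of one per category
$rows = Db::select('pages', ['id', 'parentId', 'title', 'slug']);

// Group rows by their parent id
$byParent = [];
foreach ($rows as $row) {
    $byParent[$row->parentId()][] = $row;
}

// Build a nested array, starting from the root (parentId = 0 here)
function buildTree(array $byParent, $parentId = 0): array
{
    $branch = [];
    foreach ($byParent[$parentId] ?? [] as $row) {
        $branch[] = [
            'title'    => $row->title(),
            'slug'     => $row->slug(),
            'children' => buildTree($byParent, $row->id()),
        ];
    }
    return $branch;
}

$tree = buildTree($byParent);
```

This turns N queries into one query plus an O(N) pass in PHP, which is usually far cheaper for a menu of this size.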

EDIT: Of course, if the Clockwork chart is correct and the cumulative time for the queries really is just 4–28 ms, this isn't relevant.

It is indeed as you describe it: there is a query for each category.
I thought about fetching the whole structure in one query, but that's the last resort :slight_smile:

Right now I'm experimenting with rewriting a few things, like assigning the result of a method call to a variable and reusing that. For example, turning

<?php foreach($children as $p): ?>

<?= ($p->hasListedChildren()) ? 'x' : 'y' ?>
<?= ($p->hasListedChildren()) ? 'a' : 'b' ?>
<?= ($p->hasListedChildren()) ? 'c' : 'd' ?>

<?php endforeach; ?>

to

<?php foreach($children as $p): ?>
<?php $hasListedChildren = $p->hasListedChildren(); ?>
<?= ($hasListedChildren) ? 'x' : 'y' ?>
<?= ($hasListedChildren) ? 'a' : 'b' ?>
<?= ($hasListedChildren) ? 'c' : 'd' ?>

<?php endforeach; ?>

That already makes a difference.

OK, I think I have to take back my assumption about Blade.

The culprit, and this wasn't clear from my earlier code examples, was indeed calling $p->hasListedChildren() several times in each foreach iteration. Assigning the result to a variable at the beginning of the loop brought the rendering time down dramatically.

I didn't expect that, but I guess that's why performance tests are run in loops :slight_smile:

@php
if(!isset($subpages)) $subpages = $site->children();
@endphp

@foreach($subpages->listed() as $p)

	@php
	// cache vars
	$depth = $p->depth();
	$hasListedChildren = $p->hasListedChildren();
	@endphp


	<li class="
		depth-{{ $depth }}
		{{ ($depth === 1) ? ' nav-item ' . $p->pageRootClass() : '' }}
		{{ ($hasListedChildren && $depth < 2) ? ' dropdown ' :'' }}
		{{ ($hasListedChildren ) ? ' has-dimmer ' :'' }}
		"
		data-has-listed-children="{{ $hasListedChildren }}"

		>
		<a
			class="
				nav-link
				{{ ($depth > 1) ? ' dropdown-item ' : '' }}
				{{ (!$p->isActive()) ? '' : ' active ' }}
				@if($hasListedChildren)
					{!! ($depth < 2) ? ' dropdown-toggle ' :' icon-arrow ' !!}
				@endif
			"

			href="{{ $p->url() }}"

			{!! ($hasListedChildren && $depth < 2) ? '' : ' up-dash ' !!}
			{!! ($hasListedChildren && $depth < 2) ? '' : ' data-prefetch="true" ' !!}


			{!! ($hasListedChildren && $depth < 2) ? ' data-toggle="dropdown" ' : null !!}
			{!! ($hasListedChildren && $depth < 2) ? ' data-target="dropdown-' . md5($p->url()) . '" ' : null !!}
			>
			{{ $p->title() }}
		</a>
		@if($hasListedChildren && $depth < 3 )
			<ul
				id="dropdown-{{ md5($p->url()) }}"
			 	class="
				dropdown-menu | dropdown-menu-dark

				{!! ($hasListedChildren && $depth >= 2) ? ' submenu ' : '' !!}

				"
				>
				@include('partials.treemenu-mega', ['subpages' => $p->children()])
			</ul>
		@endif
	</li>
@endforeach

That example cries for a best-practices recipe; it's something I see so often that values are queried more than once instead of being stored in a variable.


It would be good to include it right into the treemenu snippets.

It's something one might copy and paste in the early stages of a project (as I did) and then forget about.
Since those page methods were so handy, I didn't think much about what happens under the hood, and I didn't want to clutter the template with extra variable declarations. It really doesn't matter for a small site, but when the site suddenly grows by 150 virtual pages, the performance degradation becomes noticeable.

Thank you all for your help narrowing down the problem.