Thanks for asking the right questions that got me to rethink a few things and dig deeper.
My page tree consists of a handful (15?) of Kirby pages and 150+ virtual pages from a database, nested 3 to 4 levels deep.
I’m using @ahmetbora’s kirby-blade (https://github.com/afbora/kirby-blade), which works great, and I can live with the general overhead of the Blade engine compared to native PHP templating. But it looks like the recursive calls to the treemenu snippet (or partial, in Blade) are what adds up.
Here is my test setup.
I measured the time with Clockwork (I mentioned the setup here in the forum before):
```blade
@php
  // time the native PHP snippet
  clock()->startEvent('treemenu', "Treemenu");
  snippet('treemenu');
  clock()->endEvent('treemenu');
@endphp

@php
  // time the Blade partial the same way
  clock()->startEvent('megamenu', "Megamenu");
@endphp
@include('partials.treemenu-mega')
@php
  clock()->endEvent('megamenu');
@endphp
```
treemenu-mega.blade.php (a stripped-down and customized version, just to illustrate the case):
```blade
@php
  // fall back to the site's top-level pages when called without $subpages
  if (!isset($subpages)) $subpages = $site->children();
@endphp
@foreach($subpages->listed() as $p)
  <li class="depth-{{ $p->depth() }}">
    @if($p->hasListedChildren() && $p->depth() < 3)
      <ul>
        {{-- recursion: one @include per level --}}
        @include('partials.treemenu-mega', ['subpages' => $p->children()])
      </ul>
    @endif
  </li>
@endforeach
```
and snippets/treemenu.php (the original from the cookbook):
```php
<?php if(!isset($subpages)) $subpages = $site->children() ?>
<ul>
  <?php foreach($subpages->listed() as $p): ?>
  <li class="depth-<?= $p->depth() ?>">
    <a<?php e($p->isActive(), ' class="active"') ?> href="<?= $p->url() ?>"><?= $p->title()->html() ?></a>
    <?php if($p->hasChildren()): ?>
      <?php snippet('treemenu', ['subpages' => $p->children()]) ?>
    <?php endif ?>
  </li>
  <?php endforeach ?>
</ul>
```
The native treemenu snippet is much faster than the Blade megamenu partial.
The entries below are calls to ProductsPage->children(), where I build the pages array from the virtual pages in the database; performance-wise, they are pretty insignificant.
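(In case it helps: the children() override is essentially the standard Kirby pattern for virtual pages from a database. Here is a stripped-down sketch; the `products` table, its `name` column, and the templates are placeholders, not my actual setup:)

```php
<?php
// site/models/products.php -- stripped-down sketch of the virtual pages
// pattern; table, column, and template names are placeholders
class ProductsPage extends Page
{
    public function children(): Pages
    {
        // build the children collection only once per request
        if ($this->children !== null) {
            return $this->children;
        }

        $pages = [];

        foreach (Db::select('products') as $product) {
            $pages[] = [
                'slug'     => Str::slug($product->name()),
                'template' => 'product',
                'model'    => 'product',
                'content'  => ['title' => $product->name()],
            ];
        }

        return $this->children = Pages::factory($pages, $this);
    }
}
```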
I believe the page with the Blade menu would still be fast enough once it’s live: optimized Linux hosting instead of some weird localhost Windows XAMPP setup, and Clockwork’s debugging overhead turned off (Xdebug was already off during these tests, because that degrades performance even more). Still, I think it’s better to act now and optimize that bit rather than later; it only gets worse as the site grows.
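One workaround I’m considering: keep the layout in Blade, but render the recursive part with the native snippet, so Blade is only involved once instead of once per level. Since Kirby’s snippet() helper returns its output when the third argument is true (and it is available inside the Blade templates, as in my timing code above), something like this untested sketch should work:

```blade
{{-- render the recursive menu with the native PHP snippet and only embed
     the returned HTML; no per-level @include compilation in Blade --}}
{!! snippet('treemenu', [], true) !!}
```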
When I asked about caching HTML fragments, I was thinking of this article: https://signalvnoise.com/posts/3112-how-basecamp-next-got-to-be-so-damn-fast-without-using-much-client-side-ui (see #2, “Caching TO THE MAX”).
Of course, caching the whole page has an immediate effect and brings rendering down to 190 ms in this case. My fear with this approach is that on a large site I might forget to exclude some pages from caching.
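Translated to Kirby, the fragment-caching idea from that article might look like this: cache only the rendered menu instead of whole pages. A sketch, assuming a custom cache named 'menus' can be enabled in config.php via 'cache' => ['menus' => true]; the cache name and lifetime are made up:

```php
<?php
// cache only the rendered menu fragment instead of the whole page
$cache = kirby()->cache('menus');

if (($html = $cache->get('treemenu')) === null) {
    // snippet() returns its output instead of echoing it
    // when the third argument is true
    $html = snippet('treemenu', [], true);
    $cache->set('treemenu', $html, 60); // lifetime in minutes
}

echo $html;
```

Then only this one fragment needs flushing when pages change (e.g. in a page hook), instead of maintaining an exclude list for full-page caching.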
If someone has more to add to this topic, that would be appreciated.