Episode 241 | September 16th, 2025
Hello everyone and welcome to another episode of the Modernize or Die Podcast.
This is episode number 241, September 16th, 2025.
My name is Daniel Garcia.
I am a senior developer here at Ortus Solutions and with me is...
Luis Majano.
Woohoo!
I'm here!
the founder and everything else that you do for Ortus Solutions.
How you doing Luis?
I'm doing good, man.
I'm doing good.
I'm super excited to be hosting with you.
It's been a while, to tell you the truth, so I'm really excited.
Nice.
Well, you've been so busy doing all sorts of stuff, traveling all around the world.
I kind of feel like, remember that old TV show, Carmen Sandiego?
It's like, where in the world is Luis Majano?
You're in Japan recently.
You're in who knows where, just all over the place.
Yes, yes, we can talk a little bit about that.
The Japan experience was definitely incredible.
Lots to chat about, if you want, about that area of development that we're doing in Asia.
So it was pretty exciting.
Well, let's just dive in and kick things off with Ortus news right away.
We're going to talk about the Ortus stuff and then we're going to talk about the BoxLang stuff, and right out of the gate with the Ortus stuff, we've got new versions of lots of goodies coming: ColdBox 8, the Ocho.
Ocho!
Yes, the Ocho.
I'm very excited about the Ocho, by the way.
Me too.
That's funny.
It's like, you know, Dodgeball on ESPN 8, the Ocho. ColdBox, the Ocho.
I know.
Well, I'm glad you caught that reference, because that movie was just amazing for me.
So dumb, but so good.
So yeah.
could dodge them all.
The Ocho is coming!
Awesome, so what's going to be included with that?
Well, this has been really a lot of development through time.
Man, I think it's been quite a few months since we started development for ColdBox 8.
Obviously the big, big feature is BoxLang, right?
Obviously BoxLang is our new baby.
And so getting all our applications basically consolidated and compatible with BoxLang has been a lot of what we've done over the past months.
With the Ocho,
The difference is that there is no more CFML compatibility mode.
So this means that ColdBox 8 will run natively on BoxLang.
You would not need the CFML compat.
So you can build truly, truly native BoxLang applications without the overhead, I would say, of the legacy CFML compatibility module that we have for it.
So that's a big one.
That took a long time.
Eric Peterson started that branch a couple of months ago, and basically I had to go through almost every single line of source in the framework to make sure that it can work on both engines at this point. So in CFML it works one way, and in BoxLang it works another way.
That was a huge deal for us, because obviously the CFML community is our community; we've been here for almost 20 years, believe it or not.
ColdBox the Ocho releases on its 19th year.
So, correct.
Easier to analyze, easier to read.
There's email integration so you can actually share the stack traces, whether they're enhanced or not, and also AI integration.
Yeah, so basically, when you get an exception, you can have another button that says: ask the AI how to solve it.
And it'll give you a pop-up of the entire prompt. Well, it'll actually open in Claude and ChatGPT for now.
We'll be adding more later.
And there's a traditional option to just copy it and put it into your prompt, right?
And this will basically take all the information about the exception, everything that's
there available, except your source code.
I still, that's something that I haven't done for privacy reasons.
And that will allow you to basically get, and it's really accurate.
I've tried it with several of our, or my tests here, and it's basically solved the issue
extremely fast.
Now, the next version of ColdBox, the 8.1 and forward, if it detects that you're on BoxLang
and using the BXAI module, then it will do it automatically in line.
So it will basically, but you will have to obviously give it permission, right?
That's the point, right?
So it will be proactive.
So if you have BXAI installed and there's a flag for it in the settings, then basically
you don't have to go and take the prompt and all the data to your tooling.
It will basically be integrated, right?
And you will select your LLM of choice at that point.
Is any other language doing that right now?
I don't feel like Kotlin or some of the other JVM languages are.
This might be like a brand new thing.
I'm sure people are thinking about it, but...
I think I saw Lucee have an extended error page at CF Camp this year.
So I think they're dabbling with that ability.
But I think it's kind of all or nothing, right?
So I really haven't seen it.
And I think that obviously we live in the age of AI.
And there's tons of AI stuff that's coming, specifically for ColdBox, especially the
ColdBox CLI.
There are new CLI AI commands coming soon as well.
And this will basically allow you to basically integrate all the GitHub, sorry, all the
copilot instructions and LLM instructions into your application.
So I'm developing all these instruction sets so you can very easily integrate this with
your LLM of choice to help you and assist you in building.
Plug in, play, and get all the Ortus goodness to help you jumpstart your interactions with it.
Nice.
So obviously there's a lot of focus on AI coming, and ColdBox 8 kind of sets the groundwork for all the tooling that's going to come.
Nice, nice.
I'm going to interject really quick.
I want to give a shout out to the chat.
We've got our buddy Scott Steinbeck saying hello to everyone.
We've got our buddy Kevin Wright also saying hello, and Scott has a question and Kevin has a question.
So I'll start with Scott Steinbeck first.
You're there too.
So, okay: what would be the workflow of taking a ColdBox app in CF to move to BoxLang?
Cool.
So we're actually documenting this because we have done it already.
We did major updates of all our internal applications from CFML to BoxLang.
And I can attest right now.
Yeah, so CFCasts and the BoxLang Academy, which obviously were written in Lucee, CFML, are
being certified at this moment by our Salvadoran team to basically release, hopefully, in
the coming days.
So we were done with the migrations.
I can honestly say that I don't think there were any code changes.
I think the only code changes that were done were when we were discovering some bugs,
specifically with file uploads and some intricacies with HTTP posts, but everything is
resolved in BoxLang 1.6.
So we will document everything, our process, everything that we did.
And in all honesty, the majority of the work was basically updating the server.json to
add BoxLang and the right modules.
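In practice, the server.json change he describes is tiny. A hedged sketch of what that might look like in a CommandBox server.json (the `cfengine` setting is the documented way to pick an engine; treat the exact shape as illustrative, not the team's actual config):

```json
{
    "app": {
        "cfengine": "boxlang"
    }
}
```

Any BoxLang modules the app needs, such as the CFML compat module, would then be installed alongside it.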
I believe the Salvadoran team has not made any code changes that were drastic for the migration.
So the true essence of what we wanted holds true: it's a drop-in replacement for your CFML.
I think one point Scott might be asking as well, if I'm understanding him right, is: could he take some of the CFCs, say, run them through the transpiler wizard, whatever, and end up with the BX files with all the BoxLang syntax, and not have any CF stuff in there at all?
It'd be pure, 100% native BoxLang.
Yeah, you can totally do that now.
I mean, the transpiler basically takes care of all the rules.
And this is something that we're going to do as well: once we're done and we've approved it and we've certified it, we're going to be transpiling it, and CFML will be bye-bye.
And then everything will be BoxLang, right?
So yes, the transpilation portion is your one-way ticket to get off the platform, obviously, definitely.
And that's something that we will do.
The only caveat that I saw that required a code change was date formatting.
And this is kind of the bane of Jon's existence, because Jon Clausen has been dealing with date formatting.
I think that by this point, he does not love it.
But the sad thing is that the CFML engines took their own kind of approach to date formatting instead of what Java offers, to the full extent.
And that's okay if you want to enhance, but I don't think it was an enhancement at all.
I think it's just more confusing.
It's always been confusing that way.
So the date patterns that Java offers natively are preset, right?
And CFML does not adhere to that.
Not in all of its extent, right?
Both, both.
one of those bugs was mine that I filed.
it's like ACF does one thing, Lucee does another, what the heck.
Yeah.
Yeah.
So especially dealing with months; it's that months are not minutes, right?
And that's mostly when you use date-time formats, right?
Capitals versus lowercase, yep.
That'll get you.
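To make that caveat concrete, here's a plain-Java sketch of the java.time patterns being referenced (BoxLang runs on the JVM; this is not BoxLang's own dateFormat function, just the underlying Java behavior): capital M is month-of-year, lowercase m is minute-of-hour.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

class DatePatterns {
    // Formats a date-time with a java.time pattern. In these patterns,
    // capital M means month-of-year and lowercase m means minute-of-hour,
    // so a mask copied from CFML can silently print minutes where you
    // expected months.
    static String format(LocalDateTime dt, String pattern) {
        return dt.format(DateTimeFormatter.ofPattern(pattern));
    }

    public static void main(String[] args) {
        LocalDateTime dt = LocalDateTime.of(2025, 9, 16, 10, 5, 0);
        System.out.println(format(dt, "yyyy-MM-dd HH:mm")); // 2025-09-16 10:05
        System.out.println(format(dt, "yyyy-mm-dd"));       // 2025-05-16: minutes, not September!
    }
}
```

That second mask is exactly the "months are not minutes" bug: it compiles and runs, it just quietly prints the wrong value.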
that was one of the caveats that we discovered.
Some caveats that we have seen have been around date parsing as well.
CFML has a history of being extremely flexible when it tries to basically do date and time matching, which is really a performance issue in CFML.
Even when we did the Mementifier to create JSON dates and timestamps, I completely skipped the CFML portions because they were too slow.
So this is something that we have documented and we will be releasing as a blog post maybe
or maybe added to the docs.
It wasn't a lot, to tell you the truth.
So the docs might just be a page.
So I think my understanding was that for the compatibility mode, there might be some more flexibility on what will be allowed.
But once you go to straight BoxLang, we're going straight Java, and that's just the way it is.
Cause that's how it should be.
With the CFML compatibility mode, you might not even need to address this, right?
And that's our intent, right?
If you're using the CFML compat module, the intent is to be compatible.
There might be edge cases out there.
There's another one we discovered about visibility scopes for static, which is something ridiculous that I didn't even know Adobe had introduced.
And basically, it's like you can add private or public to the static scope.
Don't even get Brad started on that one, but it's basically irrelevant for our design patterns of CFML, because static is a scope, right?
Everything's scope driven.
And when you say public or private, you already have scopes for that, right?
Variables or this.
So, and they don't go there, right?
So it's like a private static and a public static.
Very confusing, but anyways, that compatibility in the parser,
It's a tricky one, and that one we basically discovered but have not addressed; we documented it.
It's a tricky one because it involves the actual parser notations.
So I don't know how we're gonna deal with that one.
Let's put it that way.
But that's the only tricky one.
Kevin had a question earlier: will AI do it correctly when we integrate all these AIs?
My kind of impression with AI, working with Claude especially, is trust but verify.
It does a pretty good job of getting you there.
It might not be a hundred percent perfect all the time.
I know Brad likes to comment every time ChatGPT gives him something wrong, and he just shakes his head at the clouds.
But...
Well, you know, the AIs, the LLMs, are designed to do what's called hallucination, right?
So they're designed to hallucinate.
So that means that basically, if they don't know, they just make it up, yes.
So the LLMs, if they don't know the full extent of something, they just hallucinate it.
And that's the term they use.
And it will basically make things up, right?
It will hallucinate because it doesn't have enough data to actually corroborate in order
to build what you required.
So, yes.
I've asked it before.
Can you tell me about blah, blah, blah?
And it gives me info.
I'm like, Nope.
If you go to their site, they do not offer that feature anymore.
So what about this?
And they'll come back.
Oh, you're right.
I missed that.
Okay.
Here's the correct info.
And it's very interesting how it goes back and forth.
LLM AI hallucination is a thing and you have to be aware of it.
And that's why, you know, as a developer, you know, these are tools, obviously, it has
made me a thousand times more productive than I ever imagined.
That's a fact.
But also, I cannot blatantly just use it and not verify. Like you said, it has to be verified.
It has to be verified.
And...
through experience of all the work that we've done with AI right now, you also have to be
careful because the architectural decisions it makes are not sometimes the best ones.
So finding bugs, fantastic.
Patching little bugs, fantastic.
Designing a system from scratch, it's gonna basically give you one opinion and one
approach.
So I think that for me, the best thing is to iterate.
Iterate and iterate if you're starting something from scratch.
Use it like Iron Man uses Jarvis, right?
And you're asking it things, and that's how I feel, right?
It's like, no, how about this pattern?
How does this pattern look?
Right?
And this is how I use it.
I think this has been, through experience, the best way I have adapted to use the AI tooling in order to bring sanity, because sometimes it just goes on a tangent, complete hallucination, giving me something that actually doesn't even compile or work.
I kind of feel like we could have a whole separate webinar, or even a video or something, just on how you use AI, practically using AI to do your development.
I think people would really be interested in that. Not just development, but also how you're using it a lot for writing documentation, right?
Or stubbing it out and then cleaning it up.
In all reality, BoxLang would never exist if AI was not in play here.
When we started, even ChatGPT and Gemini barely existed and were very raw.
So AI has kind of been core to everything that we've done with BoxLang.
And the speed of iteration and where we are is because of this tooling.
It doesn't mean that it's perfect.
It just means that it's a tool, right?
And we use it extensively now for documentation.
I have, you know, taught my model all the different things, even how I like code, right?
And when I want to write more pieces of docs, I have everything prepared in my project, and I can basically write, you know, a whole set or a whole page of documentation in about three minutes, right?
Obviously I have to proofread and update some of the examples.
Sometimes it hallucinates and it doesn't read correctly, but...
I think that we can definitely do another podcast on this and maybe teach some folks on
how to do that.
The last quick question before we've got to move on, because there's a lot: if you had to say, as of today, September 16th, what's your favorite AI, what would it be?
I think that I use all of them.
So Claude mostly is my go-to for code development.
For discovering really difficult bugs and for generating code, and inside of VS Code, Claude is my agent of choice.
Now, ChatGPT is my LLM of choice for iterating ideas.
So,
I do not do that with Claude; I do that with ChatGPT, and it's the best to just hash out an idea that I have, or a new runtime that I want to develop, or something that I want to build.
It gets me from zero to 98 extremely fast and really well.
And then I execute that through Claude.
Gemini I use for kind of real-time information and for image generation.
For me, it has been the best for image generation right now.
Yeah, ChatGPT second, but I would say that Gemini has been my favorite for doing image generation.
And then Grok, I really have not scratched the surface a lot on it, so I cannot really tell you a lot about it.
But those are kind of my experiences day in and day out with some of these.
Well, we definitely need to do a practical AI session just to show people, from somebody who doesn't know anything about AI to people that are half on the fence.
But let's keep going through lots of good stuff.
So I think the other big thing was CBWire 5, I believe. Is that coming out this week?
Yes, so we've been working hard with Grant.
So we have our training course next week.
So big plug there.
Yes, so ColdFusion Summit is in Vegas next week.
There's still space.
We still have a few seats left for a workshop with Grant.
So that will be Wednesday and Thursday for a workshop.
And we're basically going to be using ColdBox 8 and CBWire 5.
So Grant has been hard at work as well on the new iteration of CBWire.
making sure it has all the new goodness of LiveWire, as well as doing pure BoxLang
implementations as well.
You can actually write your components in BoxLang now, which the syntax is really nice as
well.
So for next week, we're going to be definitely using ColdBox 8, CBWire 5, and the newest iterations of BoxLang as well.
And so all the new goodies will be available to these students next week.
Nice.
That is awesome.
And Grant is aware that it's going to be released this week, right?
Well, just like I'm aware of, ColdBox 8, right?
Let's just say I haven't slept in a while.
I bet, I bet.
All right.
let's move on.
There's a lot of stuff I had on there.
I'm going to kind of skip over some stuff, and we can cover it on the next episode, because we've got you and you're giving some awesome info to everyone here in the chat and on the podcast.
I'm going to go down to BoxLang, BoxLang, BoxLang: BoxLang 1.6.
I heard there's a rumor that it's coming soon.
Yes, well, we're working hard.
I want to make sure I can cut something.
We're usually doing minor releases once per month, but maybe we're going to do a release or maybe two releases this month.
There's just so much stuff that we've done for this release.
And I think it's just worth getting it out there.
I think it will be beneficial for the trainees and as well for people that are migrating
their applications already to BoxLang.
This has a tremendous amount of work in it in terms of performance.
Like I said, Brad has done incredible work with his TechEmpower benchmark workbenches, and we're gonna be releasing that, and you're gonna be seeing basically side by side how we perform against Adobe and against Lucee, all basically using the TechEmpower standards, right?
But we have been basically going over everything with a fine-tooth comb.
Because obviously you don't pre-optimize, right?
Our focus was to get to market, right?
And then we're basically iterating and getting better and better and better.
And I can honestly say that this amount of work that specifically Brad has done on performance will blow people's minds on how fast the engine is right now, even with database connectivity.
Nice.
So we saw the performance improvements; I've heard about it.
I'm really excited about it.
Any other kind of big features being released as well?
Not that that's not big enough, but anything else with 1.6, say? I know there are some bug fixes and some little things you're finding here and there. Any other killer things coming with it?
Well, we've already done about 57 tickets in these three weeks of work, right?
But performance was a big one.
So the performance has been a big one.
Compatibility has been the second one.
Another one is asynchronous programming.
So we've basically done a tremendous amount of work for asynchronous programming and parallel executions in BoxLang.
So you're going to see virtual threading everywhere now.
Easy to work with.
Easy to manage.
I've also added a new class called the BoxLang executor, which is an enhanced version of the executor services, available now in 1.6, and it includes a lot of AI prompts internally as well to get statuses, basically health metrics, and statistics.
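For a sense of what that looks like at the JVM level, here's a plain-Java sketch of the virtual-thread executor pattern described above (this is Java 21's standard Executors API, the machinery BoxLang's executors sit on; the BoxLang-side class names and BIFs may differ from what's shown):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class VirtualThreadDemo {
    // Runs every task on its own virtual thread and collects the results.
    // Virtual threads are cheap enough that one-thread-per-task works
    // even for thousands of tasks.
    static List<Integer> runAll(List<Callable<Integer>> tasks) throws Exception {
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            // invokeAll blocks until every task completes, so get() below
            // never waits; it only unwraps the result.
            return pool.invokeAll(tasks).stream().map(f -> {
                try {
                    return f.get();
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }).toList();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Callable<Integer>> tasks = List.of(() -> 1 * 1, () -> 2 * 2, () -> 3 * 3);
        System.out.println(runAll(tasks)); // [1, 4, 9]
    }
}
```

The try-with-resources on the executor is the idiomatic cleanup: closing it waits for the pool to shut down.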
well, hang on a second.
So you're saying you've got AI enabled within the BoxLang to be able to tell you the
health and stats of what's going on?
Yes, so not through AI yet; I'm preparing it.
So yes, so and this goes into basically setting the framework for the BXAI module, which
is going to bring the MCP server native integrations.
So the model context protocol integrations are coming to BXAI and the actual language
itself will be an MCP server for BXAI.
So basically we're adding all the different inputs inside already
So we can ask your application: hey, how are my executors doing?
And it will give you a full report of how your application is doing.
And this goes into a new module that I've been iterating on, called BX Orion, which I really haven't shown anybody.
But this was basically my idea of how to graphically represent executors threading all the
Java metrics that we've done visually.
So this is something that I've been iterating in ChatGPT.
Mm.
we're going to be targeting that hopefully soon.
And this basically is preparing the runtime to give all these health analysis and
information so any LLM can leverage it and basically have real-time information about your
application.
So a lot of things in that area being worked on inside of the core, which is very exciting
as well.
A lot of documentation has gone out, which for me is also part of the 1.6 as well.
So tons of stuff coming; that's just a synopsis of a few things.
That's awesome.
I don't think I've heard about some of those things.
I can just imagine in my mind the dashboards that you can create to see the health of your app at any given time.
That is awesome.
and I don't know if Brad showed you the new WebSocket clustering that's coming.
I heard about it, and we're using it with a client project.
And so yeah, it's really cool.
You know, along with all the new stuff there, I think there's something about starter templates.
Is that right?
So there's more that's coming there.
So with the WebSockets, it's a lot of work that Brad's been doing to bring clustering to
BoxLang.
And this will allow you basically to do easy WebSocket clustering.
And we're building also a dashboard for it as part of the BoxLang Plus and Plus Plus subscriptions, so you can basically visualize all your cluster services and how they're doing with WebSockets.
Brad even went to the extent of doing STOMP patterns.
So you can do design patterns within WebSockets.
I mean, really fancy stuff that we're going to be releasing.
And BoxLang 1.6 basically was the impetus around all of these things.
And that brings us to the starter templates.
So this is something that I've been iterating as well.
And our first template is ready to go pretty soon here, which is basically how we're building the BoxLang admin.
And it's the desktop runtime.
So we have another runtime coming in BoxLang 1.6, and that's desktop application development based on Electron.
So now any developer is going to be able to create basically desktop applications with BoxLang very, very easily, in less than 10 minutes.
And we even have some series that we're going to be doing to teach people how to do it very easily.
And the new runtime also comes with a packager, and this was very important.
That means that when you're ready and you've done your desktop applications, it will package the BoxLang mini server, which basically powers the desktop application, with an included JRE, and produce a single binary for you.
So this means that when you work on this Electron desktop template, it'll package it for you.
I built the packager.
So basically you'll just say, you know, BoxLang package, and boom, it'll basically compile all the source, it will download the mini server version that you're working on, it will download the JRE, and it will actually produce a package for each platform, right? Whether it's Linux, ARM, Intel, Apple, et cetera, as part of the Electron process.
And this was important to do a turnkey operation here.
So right now, with one single command, you're gonna be able to produce your desktop applications built on BoxLang with the BoxLang mini server, in a matter of seconds, right?
And this is something that we've never had before, right?
You just distribute it. So how does that look? It builds the executable, like for Windows it's an executable?
You just give it to someone and say: here you go.
Run it.
There's your BoxLang thing.
For Mac, it'll do a DMG or an app, and for Windows, it'll do an EXE.
And I'm still working also to see if I can get some installers going so people can
actually do automated installations and stuff.
Is that also bundling databases with that, or do you still need to have that as a dependency?
No, it comes with a SQLite database as well.
So it comes pre-configured with a SQLite database.
It comes pre-configured.
I did a little framework on top of Electron as well to abstract a lot of things for
developers so they don't have to deal with full Electron.
So you have a tray menu module, you have an app menu module, icon sets prepared for you.
Actually, the CSS is based on Sass, and I've actually developed the Sass to build desktop applications based on a BoxLang theme.
So you don't even have to basically leverage anything.
You can go off the bat and use our theme basically to develop your applications.
Obviously, you can choose whatever you want.
And I really wanted to make it turnkey, right?
So when is this going to be available?
Cause I see people are excited about it.
I'm excited about it.
I've been hearing snippets here and there.
I know it's something in progress and if you had to guess, weeks, days, months, what?
I just haven't announced it.
If you want to fool around with it, I'll put the link in the chat so people can start working on it and start basically looking at what we're doing.
It's on GitHub right now.
It's boxlang-starter-electron.
Please note, it's still raw.
There's still things that are missing.
There's still things that I'm working on, so it's not completely there.
But it's getting there.
Might be a few changes here and there, but the idea is that all these starter templates
are going to start popping up.
And this one is a real runtime, right?
There is a lot of stuff here so people can actually build desktop applications easily.
We've also done a CLI starter template, and that's going to go there: boxlang-starter-cli.
Same idea, same concept.
One single runtime will be produced.
So people don't have to worry about distribution.
So it'll basically package your CLI application, it'll bring the JRE, it will bring the BoxLang binary of choice, and it will package it for you.
And then basically this goes out as an app or an EXE, and it will take care of all the packaging and distribution for you, right?
There's a Tomcat version as well, so you can do boxlang-starter-tomcat, for those Java developers out there.
The AWS template, it's now called BoxLang starter AWS, has been completely revamped as
well, and that one's available there as well.
There's a starter mini server and a starter CommandBox. And these will have sub-iterations.
So for example, this Electron one is raw BoxLang, but it will come with a setup.bx that you will run once you basically clone it.
And it will basically ask you: do you want to just use vanilla BoxLang, or do you want to use ColdBox? And basically, it will build the ColdBox portion.
Wow.
I asked this a little bit earlier, but what other languages are doing things like this?
I feel like I don't hear about other languages doing what BoxLang is doing and what we're trying to do here, with all these runtimes and all the functionality.
It's almost like just a plethora of goodness.
Yeah, I think that the only language that I know of that is multi-runtime, of course, is
Kotlin, right?
So Kotlin takes the lead on being the first multi-runtime language.
So they do some of this stuff, but they don't do any of these starter templates or
anything.
The starter template ideas came from our own ColdBox application templates that we've been
working on for a long time.
And also Spring has a lot of initializer templates as well.
They also call them starter templates.
So, to clarify, Spring Boot is the one that creates these starter templates.
And Java developers have been all about tooling.
And this is something that is important for us to get out there: to give the best tools to our developers so they can be fast and they can be mean, and they can develop these applications and now go literally anywhere.
And we have not been able to do that for years, right?
Wow.
That's fantastic.
I feel like we kind of need to start like a channel in the Ortus team or the BoxTeam, like Ortus Labs or BoxLang Labs, as we're doing all these really cool alpha things, getting them out there for people to kick the tires and play with, and kind of see what's coming.
Because some of this stuff, honestly, I didn't hear about until you just said it.
I mean, I knew little snippets here and there, but wow.
These are some of the things, you know, from my trip to Japan; all that inspiration came from there, right?
You know what, I'm going to ask about AI.
So you said there's some updates to BXAI and then I want to ask you about Japan.
Cause I really want to hear about that.
So the starter templates, to finalize them, are something I've really been tinkering on.
And the desktop one has been what we needed, because this is for our admin, right?
So our BoxLang admin will be based off of this.
So we had it in the works, and I wanted to open source this and make it available as a runtime.
Now, BXAI: lots of work coming as well for BXAI in the next couple of months here.
Embeddings are coming.
So this will allow you basically to tap into embeddings for you to be able to use RAG, and to also use vector stores, right?
So it's important to allow that.
That opened the door; that was contributed by Curt Gratz and it's still in PR.
So it's coming.
I'm still going through that PR.
Now the embeddings are dependent on each of the LLMs, right?
And embeddings are basically just key-value pairs, right, that basically go into a vector database, and it gives you a mathematical vector representation of the input.
So you can basically do calculations on how near things are when you search for things, and then you can do things like search, or all kinds of different types of things with embeddings.
But it doesn't give you storage, right? For that you need a vector store, and there are many vector stores; Postgres has one, right, and there's like five different...
So what I've been working on now that embeddings are getting into place is to also add an
inline vector store as well.
So this will be completely optional.
So if somebody doesn't know how to work with vector stores, it'll come with a vector store
for you.
So you're going to actually start storing all of these key-value pairs and basically do real-time data searching and comparisons with those embeddings in your vector store, private, completely private.
And this is important, right?
There's also all kinds of vector stores that are SaaS services as well nowadays, right?
And people will be able to tap into that as well, right?
So that's one thing, embeddings and vector stores.
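To ground the "how near things are" part: once text is embedded as a vector, nearness is commonly measured with cosine similarity. A minimal plain-Java sketch (the embedding values below are made up for illustration; real embeddings come from the LLM provider and have hundreds of dimensions):

```java
class CosineSimilarity {
    // Cosine similarity between two equal-length vectors:
    // dot(a, b) / (|a| * |b|). Returns 1.0 for identical directions
    // and 0.0 for orthogonal (unrelated) ones.
    static double cosine(double[] a, double[] b) {
        double dot = 0.0, normA = 0.0, normB = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        // Toy 3-dimensional "embeddings" for illustration only.
        double[] cat = {0.9, 0.1, 0.0};
        double[] kitten = {0.8, 0.2, 0.1};
        double[] invoice = {0.0, 0.1, 0.9};
        System.out.printf("cat vs kitten:  %.3f%n", cosine(cat, kitten));
        System.out.printf("cat vs invoice: %.3f%n", cosine(cat, invoice));
    }
}
```

A vector store runs essentially this comparison (or an approximate version of it) between your query's embedding and every stored embedding, then returns the nearest matches.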
The other one is memory.
This is something that Jacob and I have been pondering on for months that is important to
add.
And this will allow basically BoxLang developers to create their own agents and provide
memory.
Like if you go to, you know, ChatGPT or Grok or any of these LLMs, and you just go to the Grok console, right, you don't get memory, because that's part of the offering of their agent, right?
So, and this will allow basically developers to create their own solutions and we will
give them that framework for short-term memory and long-term memory, right.
So we've been working to give you the ability to have, you know, short-term memory of your chats, and different policies around your memory, so you can do window policies and say: only keep a certain amount of tokens, or of textual information.
And remember, the LLMs charge by tokens, right?
So there are new BIFs coming as well for you to do text chunks.
So this will allow you to do a string chunk, and it will enable you to pass a file or anything and it will produce basically an array of chunked text, so you can feed it into the LLMs in chunks, right?
Cause you cannot feed them in one shot, right?
There's also a new token BIF that's coming, so you can pass a piece of text or a PDF or
whatever, and it'll tell you how many tokens it's gonna consume, right?
Which is important when working with these LLMs, right?
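The two BIFs described here — chunking text into LLM-sized pieces and estimating token cost up front — can be sketched like this. A hedged illustration only: the function names are made up, and the four-characters-per-token figure is just a common rule of thumb, not how a real tokenizer counts.

```python
def chunk_text(text, chunk_size=200, overlap=20):
    """Split text into overlapping chunks so each piece fits an LLM's context window."""
    chunks = []
    step = chunk_size - overlap  # overlap keeps context continuous across chunk edges
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
        if start + chunk_size >= len(text):
            break
    return chunks

def estimate_tokens(text):
    """Very rough token estimate: ~4 characters per token is a common heuristic."""
    return max(1, len(text) // 4)

doc = "BoxLang " * 100  # 800 characters of sample text
chunks = chunk_text(doc, chunk_size=200, overlap=20)
print(len(chunks), estimate_tokens(doc))  # number of chunks and rough token cost
```

Knowing the estimated token count before sending the request is what lets you predict the bill instead of discovering it afterward.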
And then that memory, you as a developer will be able to use it to create these policies,
depending on what type of agent you're working on, right?
And that'll be short-term memory.
And then if you have a vector store or some type of database behind it, then you can use
the long-term memory, right?
And we've built a summarizer as well.
So this will basically take your chats and everything and it will summarize them and store
them inside of the long-term memory.
So the whole architecture of memory is coming.
The picture is getting clearer.
We've been kind of iterating on this.
I've been iterating on the architecture decisions of how to do that.
It'll come with a flexible store as well.
So you're going to get, off the bat, file stores, SQLite stores, JDBC stores, CacheBox
stores, and different storage mechanisms available to you, right?
So you can decide.
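A window policy for short-term memory — keep only the most recent messages that fit under a token budget, evicting the oldest first — might look like the sketch below. This is a hypothetical illustration of the concept; the class name is invented, and the whitespace word count stands in for a real tokenizer.

```python
from collections import deque

class WindowMemory:
    """Short-term chat memory with a window policy: evict oldest messages over budget."""

    def __init__(self, max_tokens=50):
        self.max_tokens = max_tokens
        self.messages = deque()

    @staticmethod
    def tokens(message):
        # Stand-in for a real tokenizer: whitespace word count.
        return len(message.split())

    def add(self, message):
        self.messages.append(message)
        # Drop from the front until the window fits the token budget again.
        while sum(self.tokens(m) for m in self.messages) > self.max_tokens:
            self.messages.popleft()

    def context(self):
        return list(self.messages)

memory = WindowMemory(max_tokens=8)
memory.add("hello there friend")     # 3 tokens
memory.add("how are you today")      # 4 tokens -> total 7, still fits
memory.add("tell me about BoxLang")  # 4 tokens -> total 11, oldest is evicted
print(memory.context())
```

Long-term memory then layers on top: a summarizer condenses what falls out of the window and persists it to a store (file, SQLite, JDBC, vector store) for later retrieval.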
And also another important one for BXAI is your own local LLM.
So I've created an abstraction already.
So you're gonna be able to download, in our case, in our initial version, it will
have Ollama as your local language model.
And it will basically have a new CLI tool based on BoxLang.
So you're basically gonna say, in BoxLang, just AI local install, and it will basically
install the Ollama version, configure it for you, and then you'll use it just as your
traditional Ollama, but everything will be local.
And that's all open source, right?
It's all open source, Apache 2.
So anybody can build their own agents and have their own models without integrating with
ChatGPT or Grok or whatever.
And they can grow and scale basically their models and keep them private.
Right.
That's the whole point.
And then with the MCP servers built into BXAI, they can even take that further and
basically register tools and register, you know, what's going to be exposed in real time
to your custom LLM, right?
So all of this is in design phase.
The only implementation that we started first is embeddings.
So embeddings are in development.
And obviously with all these releases going on, it has taken a little bit of a back seat,
but it is in development already.
The next step will be memory.
And after memory, we'll delve into the local LLM.
So lots of stuff in BXAI for developers as well.
So we got to have a webinar on this in the coming months, whenever it's all kind of ready
to go and show off.
And I feel like AI is something that obviously has been the big buzzword for about a year
or two now.
Developers I talk to are still like, I want to use it.
I still don't know how to do it.
And what does this mean?
And, ah, the other thing you kind of touched on, which is an important part, is how much
does this cost?
Obviously with the open source, it costs nothing as far as tokens go.
But if they want to use Claude or ChatGPT, having a real-world understanding of: great,
I want to use BXAI in my app, but I don't want to get a bill for, like, an unexpected
charge.
And so I feel like that's something we have to talk about too, at some point.
True.
And maybe, since you mentioned that, another aspect of BXAI coming as well is
multi-LLM instructions.
And you can do it now, but this will be a way for the developer to define LLM models in
your configuration.
Because you bring up a great point. Obviously, using the LLMs with lots of information is
costly, right?
Day in and day out.
So you have to be aware of those tokens.
The idea behind this is that you're going to be able to define preset models with their
configuration, basically just like datasources, let's put it this way.
So you can have basically a ChatGPT configuration, a local configuration, a Grok
configuration, and then you can consume it with the AI Chat BIFs or AI service models.
And you can just say, hey, right now I just want to talk to the ChatGPT model for this
quick iteration.
No, this is for the local model, and so forth and so forth, and your code basically just
leverages that name which you've configured.
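The "models configured like datasources" idea amounts to a named registry of provider settings that calling code looks up by alias. A hypothetical sketch of the concept only: the aliases, keys, and helper function here are illustrative and not BXAI's real configuration schema.

```python
# Named model configurations, looked up by alias much like datasources.
MODEL_CONFIGS = {
    "fast-local":  {"provider": "ollama", "model": "llama3",      "max_tokens": 512},
    "smart-cloud": {"provider": "openai", "model": "gpt-4o-mini", "max_tokens": 2048},
}

def ai_chat(prompt, model="fast-local"):
    """Resolve the alias to its settings; calling code never hardcodes a provider."""
    config = MODEL_CONFIGS[model]
    # A real implementation would dispatch to the provider's client here;
    # we just echo which configuration was selected.
    return f"[{config['provider']}/{config['model']}] {prompt}"

print(ai_chat("Summarize this release"))                       # cheap local model
print(ai_chat("Draft the announcement", model="smart-cloud"))  # pricier cloud model
```

The payoff is exactly what's described above: swapping between a cheap local model and a specialized paid one is a one-word change at the call site, which keeps token costs a deliberate choice.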
Sure.
For the ones that specialize, or the ones that, you know, are cheaper when I don't need
the specificity.
That makes sense, that's really cool.
That also makes a lot of sense.
Um, we are at our time.
We got to keep going.
I could talk about this for hours and hours, but it is exciting times.
And again, here I hear snippets and I see little chat messages here and there,
but some of this stuff is new to me, and this is great.
I'm really excited, and, uh, wow.
Okay.
Um, and there's more I really want to talk about, so let's just, uh, kind of keep going here.
No, I think that with all the BoxLang stuff, all the work that we're doing and coming, I'm
definitely going to be doing a plug for my BoxLang Developer Pack.
I'm getting back into VS Code extension development.
It was just a nice little way to install one extension that brings all the extensions for
BoxLang development.
So I've always wanted to have that and it's available now.
And why not?
I created my own theme.
So I've always wanted to create a VS Code theme and Mr.
ChatGPT helped me on that one to kind of analyze and start doing different color palettes.
It comes with a nice little light and dark theme, and all the BoxLang and CFML constructs
kind of pop.
So that's awesome.
Nice.
Maybe I'll switch to the BoxLang theme.
Well, I want to configure it so you can give your own accents, you know?
But on the terms of VS Code extensions, Jacob is doing a tremendous amount of work and
love on that extension.
And he's going to be releasing a few things probably today, probably tomorrow.
ah So, linter anyone?
So he was working with Jason Berquist, and we now are going to have linting natively in
VS Code.
So when you save a file, that automatically will format it for you according to your rules,
just like the other languages have been doing for years.
I feel like that's always been a little bit of a hassle in ColdFusion, trying to get the
linting to work.
And so just the fact that it's all going to be there: you have the extension, it just works.
Yes.
Everything is going to be integrated.
And actually, this is part of BoxLang 1.6.
So all the visitors and information to visit the AST to actually do the linting is in
1.6.
We're going to decouple that later for 1.7.
But for 1.6, that's going to be a requirement.
And then, because obviously it's doing it in real time, right?
We're parsing your file and doing it in real time.
We're not using regular expressions here, right?
We're doing an accurate depiction of your source code.
So full rules for linting and formatting are finally coming.
So Jacob has done a tremendous amount of work on that one.
It's coming soon.
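The "parse the source, walk the tree, don't regex" approach can be illustrated with Python's own ast module: a tiny lint rule that inspects real call sites in the parsed tree, so the word inside a string never triggers a false positive. This just demonstrates the technique in general; BoxLang's linter works on its own AST, visitors, and rule set.

```python
import ast

def lint_no_eval(source):
    """Tiny AST-based lint rule: report every call to eval(), with its line number."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Only genuine call expressions named "eval" match, never text in a string.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) \
                and node.func.id == "eval":
            findings.append(f"line {node.lineno}: avoid eval()")
    return findings

code = 'x = "eval is mentioned here"\nresult = eval("1 + 1")\n'
issues = lint_no_eval(code)
print(issues)  # only the real call on line 2 is flagged
```

A regex for `eval(` would have flagged line 1 too; walking the tree is what makes the "accurate depiction of your source code" claim possible.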
We're also going to be open sourcing our CPR tool as well.
And this is going to be available, uh obviously supported through BoxLang Plus and Plus
Plus.
And this gives you a lot of intelligence to analyze your CFML code, specifically for var
scoping issues, cfqueryparam issues, injection issues, etc. It basically scans your source
code, gives you all the information about your code, and you can basically start squashing,
you know, security issues.
That is awesome.
Okay.
Moving on the next section of the podcast, we do CFML updates and we're kind of not too
many this time around, but there is a very important one that we do want to talk about.
And there is a security update for Adobe ColdFusion coming out.
Let me go ahead and get those links out there.
Okay.
So I want to get some links out. So it is a security update.
It's available.
We got the Adobe link we posted, but then we posted Charlie's link.
If Adobe will tell you, hey, this needs to be updated, Charlie will tell you what it is
and why it is, and all the things you need to think about, and all the gotchas and
everything else.
And I highly recommend looking through his stuff.
And then the last thing was Pete.
He's also kind of just giving his updates, and I want to put that link out there so you can
just follow security updates from Pete as well, from Foundeo. So we really appreciate that.
And then as far as I had the link up, just closed it.
Um, so this is a P1, priority one, critical security vulnerability.
Um, it is affecting 2025, 2023, and 2021.
Um, so it's not something you want to wait on.
You are going to want to take care of it, but really look through, uh, Charlie's
blog post.
He talks all about it.
Finding the update.
Now I think Adobe said that they have not found any instances of this in the wild.
But you don't want to wait around and be the first.
You don't want to be patient zero for this one.
So go ahead, read through there, do what you got to do to the updates.
Yeah, I feel we could talk about this for a long time, but I just want to give the links so
people can go read it.
It's a lot, so go do that.
Then there was one more CFML update we'll talk about.
Chris Simmons, he's got a nice blog post I am posting right now.
It just talks about how to update your VS Code settings to set CommandBox as your VS Code
terminal.
And so it's pretty simple.
Go to the blog post, I put the link out there and you can see how to do that.
And I think that's it for the CFML updates we're gonna talk about.
Moving on to events, we're going to kind of go through these very quickly in case you
missed it.
Leveling up your async game with BoxLang.
It's Jon Clausen.
That was last month.
And we've got the recording out there.
The September webinar is Build an App in BoxLang, Part Two. That's September 29th at 11
AM Chicago time.
Um, I'll put the link out there for that.
He's got the first one he did, you can check that out.
He's just continuing the series on what to do.
Also, big news next week.
You've been hearing us talk about it for months.
It's the Adobe ColdFusion Summit 2025 and the CBWire workshop.
And so, at Summit 2025, we've got Grant Copley and Jacob Beers from Ortus, who will be
there speaking.
I believe we're going to have a presence there.
Yes, so I'll be there.
I'll be there.
I arrive Sunday and you can check our booth out on Monday and Tuesday.
So come by the booth and get some goodies and some cool stickers and some other swag that
we're bringing and come chat with us and tell us all the good stuff that you're using with
our products.
And I'm going to take my recording equipment as well.
So if you guys want to hang out and do some type of interviews or just, you know, I'm just
interested in talking to developers and see their experience with all of our products.
See how we can make them better.
Nice, nice, nice, nice.
And then we also have, I think, the day after, is that correct?
Or the two days after, I should say, sorry.
So Monday and Tuesday is the conference, and then Wednesday and Thursday we're gonna have
our training. It's very affordable, it's only a few hundred bucks, and you get to hang out
with me and Grant. It's a nice location at the Bottega, about ten minutes from the World
Resorts, and you get to hang out with us and build cool apps. We still have, not plenty,
but we have about four seats left, so please hang out with us and get your ticket and come
build cool stuff with us.
Okay.
There's too much awesomeness, apparently, for this episode of the podcast.
Um, coming up, we also have a ColdFusion security training online from Foundeo; it's Pete
Freitag.
I don't know if he's done this before, but it's certainly the first time I'm hearing about
it.
They do have early bird pricing, use code earlybird25, which ends October 1st, if you want
to go on his website and get some training.
And oftentimes if you go and see Pete talk at conferences, I'm sure some of the
information will be there, but this is one of those sessions to just do. It's online, it's
virtual, it's easy to do.
Um, speaking of conferences, the ITB 2026 call for speakers is open.
I don't know how many we've gotten so far, but, uh, I haven't checked either.
But yeah, ITB is upon us.
It'll be here before we know it.
It feels like it was just here, like, three months ago, and now it's like, ah, but yeah.
Yep.
Then just really quick, the ITB videos on CFCast are there.
Hacktoberfest 2025, that's coming up.
It's been a favorite of many of the Ortus team members.
For Hacktoberfest, if they're not already, we will be tagging our Ortus repos so you can
participate.
Um, and we'll get that there.
And then finally, let's take a few minutes, Luis, and I want to hear about Luis and
BoxLang in Japan.
Well, it was very exciting, that's for sure.
This is the second time I've gone to Japan to do some work.
The first time was around ColdBox specifically.
And this one was around BoxLang, so I got to speak at two mini conferences.
One was for the AWS user group.
And it was incredible.
It was in this amazing Tokyo downtown offices.
It was like being on the Jetsons.
They had, like, robots roaming, you know, not the roads, but the actual offices, you know,
super tech-inspired.
And it was great to meet a lot of developers, you know, from around the world.
And they were so excited about AWS Lambdas.
I presented the AWS Lambda runtime, how easy it was to actually build applications on it.
Some of the DevOps folks that were there actually were more excited about the CLI
capabilities and the scheduling capabilities.
So that was very exciting to bring all these new technologies there.
And then the next day I was hosted by Datadog in Japan as well in Tokyo, an amazing
office, this amazing group of folks as well.
And this one was an event dedicated to us and that was amazing.
So they gave us basically a whole room, a huge tech room.
And we had over 40 people.
And it was a mini conference basically.
We had three hours of BoxLang content.
So that was exciting.
So it was basically three hours.
I basically wanted to faint after.
But we did an hour and a half presentation.
We did 30 minutes of kind of just hanging out, having drinks and pizza with the folks.
And then we basically moved into just coding.
So I kind of showcased how to build a RESTful API with ColdBox and BoxLang, and how to use
the VS Code extension and debugging.
And it was just great to get feedback from developers that have no background, basically,
in anything that we've done, to present something new, and to see that they were very
moved by it and wanted to start integrating it.
Yeah, excited and getting to work with it in their applications.
So uh it was great to be evangelizing and going through all of these communities.
and presenting everything that we've done.
It's just a huge validation of all the hard work for all of us here on the team that we're
on the right track here.
And it's just, it was really special.
That's awesome.
And one of the more important questions, how is the sushi?
Unbelievable, unbelievable.
Japanese food and cuisine is for me the best in the world.
Best in the world for sure.
Best in the world.
Love, love, love Japan.
Everybody should go and visit.
So I can't wait to go back to be honest.
That's awesome.
And when BoxLang takes off like gangbusters out there and we have an ITB Japan some year,
I'll be out there.
I'll volunteer.
I'll be there.
I will go.
Let's do it.
Okay.
This was a much longer episode than we normally have, but then we don't normally have Luis
as a cohost.
And so you, you know, you're very busy.
We appreciate you taking the time.
I know you're always busy doing this and that and everything else.
I don't know when you sleep, but apparently you do.
And so, uh,
Thank you for that.
I also want to say thank you to all the people in the chat.
Thank you to all our Patreon supporters, all these people that are personally supporting
open source projects and initiatives like CommandBox, ForgeBox, CodeBox, ContentBox,
TestBox, and all the other boxes, including BoxLang. All their contributions also fund our
cloud infrastructure for things like ForgeBox and CommandBox.
And so if you want to support us, you too can be a patron by going to this link,
patreon.com slash Ortus Solutions.
And then if you want to support us, we've got different packages.
And Luis, do you want to tell us what they'll get with those packages and up?
You know, they get some nice little goodies, and they'll get CFCast and Learn BoxLang
subscriptions as well.
And it's just a huge help for us if you support us in that area. We do a lot of open
source, we believe in open source, and every little bit helps. And especially now, if you
don't want to be a patron, just convince your bosses to buy licenses for BoxLang. That's a
huge deal for us and helps us maintain the language, maintain all the over three hundred
fifty libraries that we manage.
350, holy cow!
350 libraries that we manage.
Probably more now, to tell you the truth, because that number came a year ago.
So I'm thinking we have more than 350 at this point, but we manage a lot of them, and I
would say 90-95% of that is open source.
So, you know, there are lots of ways to help. If you believe in us and want to be part of
this mission, buy some licenses, convince your boss, and you're gonna be sitting on good
ground. And yeah, we're gonna be building lots of great, amazing tools that are coming.
Just to put it in perspective, BoxLang officially launched in May of 2025.
It is September of 2025 and we're at 1.6 with tons and tons and tons of cool features.
So just imagine where we're going to be May of 2026 at ITB in DC.
Yeah, and you know what's the kind of scary thing, maybe, for competitors?
It's that we work on this part-time.
We have a lot of our own clients and we have to keep the lights on and all that.
So buy more licenses, man.
Imagine what we can do when we're full time.
Imagine that.
Yep.
Awesome.
Well, anyway, thank you, Luis.
Thank you everyone for tuning in.
This has been great.
Let's do this again real soon and we'll see some of you at summit and some of you on the
interweb.
Take care.
Remember, the Ocho, the Ocho.
On that, I'm thinking already, we'll talk.
Okay.
We need it, we need it.
Alright, thanks!
Music from this podcast used under Royalty Free license from SoundDotCom and BlueTreeAudio
© 2019 Ortus Solutions