Press the go-live button, and we need to verify on YouTube. Fox, feel free to verify as well. It's not there yet... not there yet... coming... it says "waiting for VictoriaMetrics", which is kind of strange, because... oh no, I see myself, that's good. Okay, so it looks like we are live. Are my slides uploaded? It says they're processing in the background, but you should be able to see those slides. Yes, I can see them. Okay.

So, welcome everyone to our Q3 2024 meetup. As usual we start with a small warm-up for about five minutes before the first slide deck. You can find the agenda in the description of this YouTube stream. Today we will have updates on the VictoriaMetrics products — VictoriaMetrics itself, data sources, the operator, etc. We will have updates on VictoriaLogs, updates on VictoriaMetrics Cloud, updates on anomaly detection, and — what is super cool — two user stories: one from our guest Roman Novikov about VictoriaMetrics, and another from Mathias. So watch till the end to catch the user stories.

We also usually warm up by introducing ourselves and saying something about the weather or our location. My name is Roman and I'm currently in Poland. It's pretty chilly right now, about 12°C or so, but otherwise the weather is fine — I like it much better than hot weather.

Let me be the next one. I'm in the same place as Roman. I wish it were slightly warmer, that would be really nice, and next week should be exactly like this, so I'm waiting for it. It's raining, about 12 Celsius; that's fine, but for me 15 to 20 is much more comfortable.

Let me continue then, since we're moving from Poland to other countries... actually I'm also residing in Poland, but in a different city, Wrocław. Here we have fifty shades of grey and a temperature of plus seven or plus ten, and I really like this weather when I'm inside. I hope it stays like this, because we've gotten rid of all the insects, and it's a good time — golden autumn. — Sounds like you're more of a backend developer than a frontend developer. — Yeah, bad joke, sorry.

I'm in Iowa, and it's like the two weeks of the year where it's not too hot or too cold. It goes from plus 40°C in the summer to minus 20°C in the winter, and there's one week in the spring and one week in the fall where it's perfect, so we're in that week right now. — You'd better take a day off then. — I mean, aren't you my boss? So I get to take the day off after this? — We have an unlimited vacation policy, so it's up to you. We can see one of your bosses in the background — that's his tail. — I thought she was our junior chaos engineer; she breaks things.

And I'm JJ, based in Dublin, Ireland, so it's fifty shades of green over here, and it's always more or less the same temperature throughout the year, around 15 degrees, which is the case today — it's a lovely day, nice and sunny. — Stability is great. — Maybe a bit too stable; it would be nice to have a few more fluctuations throughout the year, you know, a summer and a winter, but we tend to just stay middle ground all the time.

Hello, I'm in Paris right now, and the weather is similar to Poland, I think — somewhere from 10 to
15°C, and there's no rain, but it's cloudy all day. This weather is good for walking around the center of Paris, though. Okay, Roma, want to go next? — Yes, I'm located in Ukraine and have almost the same temperatures as you guys in Poland, around 10-12 and rainy — the usual standard weather for a city which is known as kind of the rainiest city in Ukraine, and the city with the tastiest food, I also think. — Oh, that's really arguable. Okay.

So, do you think we can start? — Yeah, of course. Can you put up my slide deck? — One moment... Roman, your slides are in, and Fox, I will remove everyone except Roman from the stage.

Hi everyone, I'm starting this meetup with the first slide deck about updates on the VictoriaMetrics products related to metrics, and what's new in Q3 2024. The first thing: I would like to introduce our new team members. We have three new team members working on the engineering part of VictoriaMetrics, and I'm really happy we got these guys on the team. We have Jiekun, who is not only a good engineer — you have probably already met him on Slack and in GitHub issues. He's very helpful with answering deep technical questions about VictoriaMetrics, about the ecosystem and about other observability solutions, because he has a lot of expertise there, and he also contributes to VictoriaMetrics on a daily basis. I also welcome Phuong, who is a deep technical Go expert. You have probably seen some of his published articles about Go internals, how we use Go inside VictoriaMetrics, tips and tricks, and explanatory articles about how things work. He hasn't been with the team for long, but he already found a bug in the Go library for regular expressions — so welcome, Phuong, as well. And the third is Artem Fetishev; you have probably already seen him on GitHub too — he has already contributed more than 10 pull requests with various improvements and fixes in the VictoriaMetrics core. So welcome to the team.

What happened in Q3? Last time we spoke was, if I'm not mistaken, in June at the Q2 meetup, and since then we have had three major releases — I'm not counting the minor ones. The latest release was published yesterday, v1.104. We also had updates to the long-term support releases; you can see the latest LTS versions in the right column. We also deprecated the v1.93 LTS release, because it was simply time, and moved to the new v1.102 LTS line. You can find more information about LTS releases by using this link.

As discussed at the previous meetup, we remain focused on bug fixing for VictoriaMetrics over adding new features. We're trying to prioritize the team's efforts on improving usability and reliability instead of breaking things with new features; this is what we did this quarter, and we will discuss it a bit later. Our next goal is to start doing bi-weekly releases — a release every two weeks, instead of the releases we have right now without any real schedule. They usually happen about once per month, but sometimes they are delayed and sometimes there is more than one per month. It's better to have a stable schedule, so you can be sure when a new bug fix or enhancement will arrive.

So, new features — what was added in Q3? Yesterday we got support for multi-tenant queries in vmselect.
On this screen I'm selecting a metric and grouping it by two new labels, vm_account_id and vm_project_id, and in the response I'm getting two time series, each of them having a different label set for its tenant. This is how it works now: you can configure vmselect to query data from a multi-tenant endpoint, and you can use these two new labels right in the query filters, just like any other time series selector. You can use the same features — regular expressions, ORs and ANDs, listing values — to select the tenants you want to get data from, and you can select many tenants at once. If you omit these labels, you get data from all the tenants you have in the database.

This is also reflected in tracing: if you enable tracing when you execute such a query, you can see what exactly vmselect does. Here, for example, vmselect queries each tenant separately and concurrently, then merges the results and returns them to the user.

To configure multi-tenancy you just need to change the URL. Before, to query data from a specific tenant, you had to specify its ID in the URL — /select/0, /select/1, or /select/1:2 to also specify the project ID. Now you can put the word "multitenant" instead of the tenant ID, and vmselect will automatically start querying all available tenants on the storage nodes. In this example I'm updating the vmui settings to use the multi-tenant URL just by putting "multitenant" instead of /0, and that's how I get results from multiple tenants.

You can also use the features we already have for additional filtering on other queries, like extra_filters and extra_labels. For example, if you have a Grafana data source and 1,000 tenants in your storage, but you only want to allow users access to 100 of those tenants, you can add an extra filter or extra label which limits the search query to a specific set of tenants. On the screen, for example, all queries to the multi-tenant endpoint are additionally filtered by vm_account_id starting with 10, and data from other tenants won't be available. You can specify this extra filter in the vmui GET params or in the Grafana data source extra params, and it will work like that.

Thanks to Zakhar for implementing this feature — he put a lot of effort into implementing and testing it — and thanks to Nikolay for being a good reviewer and spending a lot of time to get this pull request merged eventually.

What else did we get? In v1.104 we also got OVH Cloud service discovery support, thanks to Jiekun who added this pull request; if I'm not mistaken, he is also working on adding more service discovery mechanisms in the future, so thanks for adding this.

Other changes — I will not go through all of them, because we had 57 enhancements, but again this quarter was mostly focused on bug fixing and reliability improvements. During these three months we had six security updates, 57 enhancements and 60 bug fixes, so I recommend everyone who is using VictoriaMetrics to upgrade to the newer versions — you get at least six security updates plus performance enhancements and so on. You can find the full list of changes in our changelog; we try to document all the important changes there.
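For readers who want to try the multi-tenant querying and extra_filters described above, here is a minimal sketch. The host, port and tenant IDs are made up, and the exact URL layout and parameter names should be double-checked against the VictoriaMetrics cluster docs for your version:

```sh
# Regular per-tenant query (accountID=0), as before:
curl 'http://vmselect:8481/select/0/prometheus/api/v1/query' \
  --data-urlencode 'query=sum(vm_http_requests_total)'

# Multi-tenant query: "multitenant" replaces the tenant ID, and the synthetic
# vm_account_id / vm_project_id labels can be used like any other labels:
curl 'http://vmselect:8481/select/multitenant/prometheus/api/v1/query' \
  --data-urlencode 'query=sum(vm_http_requests_total{vm_account_id=~"1[0-9]"}) by (vm_account_id, vm_project_id)'

# Limiting what a data source may see via an extra filter on the tenant label:
curl 'http://vmselect:8481/select/multitenant/prometheus/api/v1/query' \
  --data-urlencode 'query=up' \
  --data-urlencode 'extra_filters[]={vm_account_id=~"10.*"}'
```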
We also got updates to our dashboards and alerting rules — for example, we now have panels for stream aggregation on the vmagent dashboard, we have alerting rules for stream aggregation, and we have other alerting rules which can detect when VictoriaMetrics services are being throttled by some limits, for example. So updating those is also important for the stability of your installations. There were also many, many enhancements to the vmui interface to make it better — I like using vmui; I'm testing everything in vmui on a daily basis instead of Grafana.

What else? For Grafana we have two data sources, the VictoriaLogs and the VictoriaMetrics data sources. You can try them, they are available on GitHub. The developers working on these data sources are mostly Dmitry and Yury; you can find them on GitHub as well and ask questions or report bugs or enhancements, whatever. They also had plenty of updates during these three months, so if you haven't updated those data sources for a while, I recommend fetching the latest release and trying it.

There was a lot of work on the operator and on the Kubernetes stack in general over the last couple of months. There were improvements to service monitors, for granular tuning of alerting and recording rules, we also got VM M support, and in general there were three major releases. Thanks to Andrii Chubatiuk, to Nikolay and to Haley for working on those.

Upcoming changes — what do we expect to get in the nearest future? Jiekun is working on adding PuppetDB service discovery, for which we already have a draft pull request, and I hope it will be added to the VictoriaMetrics upstream soon. We have also almost finished support for LogsQL syntax in vmalert, which means you will soon be able to use vmalert for alerting and recording rules with a VictoriaLogs data source, similarly to how you do it with VictoriaMetrics today, just using LogsQL instead of MetricsQL or PromQL. Zakhar also filed a pull request which adds an HTTP/2 client for Kubernetes service discovery, and this is a pretty major change, because it significantly reduces resource usage and the number of connections established to the Kubernetes API from vmagent. You can check the PR and the issue where users have already tried it and reported the gains — I'm looking forward to merging it soon, maybe tomorrow, maybe next week.

We also started working on automatic limit adjustment on the vmstorage nodes. As you know, there is a notion of query complexity in VictoriaMetrics, which is basically a set of limits you can put on vmselect to prevent it from serving excessive queries — queries that select too many series or too many samples, or just take too much time, and so on. Mostly those limits are there to protect your database and your users from queries that consume too many resources or return too much data. On the vmstorage side we're now trying to implement a feature that automatically detects what the limits for query processing should be, and this automatic detection should protect vmstorage from excessive memory usage far better than just trying to pick a number to set on vmselect. There are three tickets for that — they are related — and I expect to start working on them in the upcoming weeks. Ideally you won't need to set those limits in vmselect anymore; you'll just rely on what vmstorage detects based on the amount of resources available to it.
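To make the query-complexity limits mentioned here concrete: today they are set manually as command-line flags on vmselect. A hedged sketch of the kind of flags involved — flag names recalled from the docs, values invented, both to be verified for your version:

```sh
/path/to/vmselect \
  -search.maxSeries=300000 \
  -search.maxSamplesPerQuery=1000000000 \
  -search.maxQueryDuration=30s \
  -search.maxConcurrentRequests=16
# -search.maxSeries             - max unique time series a single query may select
# -search.maxSamplesPerQuery    - max raw samples a single query may scan
# -search.maxQueryDuration      - hard per-query timeout
# -search.maxConcurrentRequests - cap on queries processed simultaneously
```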
Okay, that's mostly it. You can find all the changes I covered by visiting this link — we document all changes in our changelog — and you can find all the releases in our GitHub repos. You can also play with the latest versions of VictoriaMetrics, Grafana and the Grafana data source on our playgrounds, play-grafana.victoriametrics.com and play.victoriametrics.com. That's it.

Can you reappear again? — Yeah, let me add Fred to the call along with his presentation. One second. I wanted to add one thing: if you have any questions about what we are saying or showing, you can ask them directly in the YouTube chat, or in Slack, or wherever — ping us and we will try to answer them at the end or between the slides. I also have questions already; I will ask them in the chat. Fred, let me add your presentation and remove Roman and myself from the stage. One, two, three — removed.

Hi guys, I hope you can see and hear me well. Let me introduce myself: my name is Fred and I'm working on VictoriaMetrics anomaly detection, and with my talk we're slowly moving from open source to Enterprise — vmanomaly, which is our product for anomaly detection, is part of the Enterprise offering. So this topic will mostly be interesting for the folks who are already using Enterprise or want to try it, but feel free to ask any questions if you have them. Today I'd like to share some updates from Q3 2024: where we finished the Q2 quarter and what we want to do in Q4. Here is the little agenda — I'll start with a snapshot of where we ended in Q2 2024, then briefly cover what Q3 brought to the table, and after that I'll share some plans for the near future, for Q4 2024.

Maybe I need to take a step back for those who are unfamiliar with anomaly detection. We have vmanomaly, which stands for VictoriaMetrics Anomaly Detection; it is part of Enterprise and it's a product for discovering anomalies in time series data — which is, obviously, metrics data. It works as part of Enterprise and allows you to detect different types and kinds of anomalies in your time series data. If you haven't tried it yet, give it a try — I'll share all the links on the last slide. And for those who are already familiar with and using our product, let me remind you where we finished.
We finished on release v1.13.2. During Q2 we introduced the so-called presets mode, where we predefine the major portion of the config for users, and users only need to specify a few important parts, like the data source URLs — where to read from and where to store the data — and we introduced the node exporter preset v1.0. As a spoiler, we collected a lot of feedback about its usability and possible improvements, and we're going to release the next version in Q4, so stay tuned. Also, since the problem of false positives arises in anomaly detection — when a tool reports an anomaly but actually there isn't one — we'd like our users to be able to constrain the false positive rate with business-specific arguments, like the detection direction or the minimal deviation from the expected value. All of these links are actual hyperlinks, so you can go and explore once you have the presentation. We also introduced an on-disk mode for storing anomaly detection models, which frees RAM drastically and is helpful for resource-intensive setups. We also focused on introducing a new kind of models which are online: they can be updated on streaming-like data and they don't need to query too much data from VictoriaMetrics, so you drastically reduce the amount of data needed to fit those models, which leads to optimized resource usage. We also improved our documentation and added additional pages such as frequently asked questions, a quick start, and a new preset page. And starting from H1 2024 we began publishing product update blog posts, so expect our Q3 product updates post soon, with more examples of how these features help our users.

What was brought by Q3? We made a couple of releases, and as Roman said, we were mostly focused on making all changes backwards compatible, on bug fixing, and on unlocking additional corner cases that our users requested. One of these was to overcome limits that exist on the server side, like -search.maxPointsPerTimeseries, so our users are able to run longer-range queries from the vmanomaly side — that's useful, for example, for customers who are users of a managed VictoriaMetrics and can't really change the server parameters. We also added query parameters like data_range and step, so you can use multiple queries in a vmanomaly config and tune each one individually; by the end of the day you query less data — only the amount needed for a particular query and a particular model — and it improves the flexibility of what you can express in a single config file. We also improved the performance of our reader on multi-core instances, for reading and data processing, so you should expect a sizeable boost on multi-core systems. And similarly to what we had done for model dumps, we introduced data dumps to disk, with the same intention: to reduce RAM utilization during the heaviest fit calls of our models. So you can dump data to the local file system, and it's also supported in the Helm charts.

As for the models, as agreed and decided, we started working in the online-learning direction and introduced three new models: two of them are direct replacements for offline models, and one of them somewhat replaces harder and more complicated models like Prophet, which is offline. Let me remind you what "online" means: the model can be trained once and then consecutively updated with new data points as they come in, so you don't need to query long historical periods to update the model — it updates itself on new data.
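To make the per-query tuning and the online models a bit more tangible, here is an illustrative config fragment. This is a sketch only: the reader/models/writer layout follows the vmanomaly docs as I recall them, but the per-query key names (step, data_range) and the online model class alias are assumptions to be checked against the current documentation, and required sections such as schedulers are omitted for brevity.

```yaml
reader:
  datasource_url: "http://victoriametrics:8428"      # where vmanomaly reads metrics from
  sampling_period: "1m"
  queries:
    cpu_busy:
      expr: 'rate(node_cpu_seconds_total{mode!="idle"}[5m])'
      step: "2m"              # per-query step: this query fetches fewer points
      data_range: [0, "inf"]  # business constraint: negative values can't be anomalies here

models:
  cpu_zscore:
    class: "zscore_online"    # one of the new online models; the exact alias is in the docs
    queries: ["cpu_busy"]

writer:
  datasource_url: "http://victoriametrics:8428"      # anomaly scores are written back here
```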
We also have so-called multivariate models, which can take multiple time series as input and produce a single anomaly score as output, and we improved their applicability for bigger setups. Now, with the group_by argument, you can specify a subdivision you'd like to create a separate model for. For example, if you have different regions, different instances or different countries which don't really interfere with each other, you can still use multivariate models, but train one per subset — so you keep the benefits of multivariate models and their shared context, but you don't get interference between the results.

As for the Q4 plans, we have two major directions. One of them is convenience of use: we'd like to improve the user experience, and we'll focus on a vmui-based graphical interface, so users can go and test various detection configurations on historical data, and once they are satisfied, just copy the config and start vmanomaly in production mode. We'll also launch the node exporter preset v2, with improved visuals and based on the resource-efficient online models we have already developed — stay tuned; we got a lot of user feedback and we'd like to implement it in the best way. And since vmanomaly, and anomaly detection in general, is a rather complicated task for engineers, we'd like to create more educational articles, documentation and how-to guides, so you can revisit them and see how to make vmanomaly do its best on your data, given the default data profiles you see in your day-to-day tasks. The second direction is, again, resource efficiency: we'll keep improving model performance and resource utilization on multi-core systems, and we'll add more efficient online models of different classes as direct replacements for current offline models like Prophet, so users can optimize their existing configurations and keep good quality with way fewer resources spent.

I think that's it from my side. Here you have two links: one is the product page, where you can discover what vmanomaly is best used for, and the second is our extensive documentation, where you can read about all the configs and components, quick start guides, etc. That's it from my side, thank you guys.

Thank you. I have one question for you: when are you going to present some kind of demo? — I hope I will be able to present it with the help of the new vmui work, so it's not only a presentation but a good visual. Once we accomplish that in Q4, I'll prepare a demo for a broader audience. — Okay, thank you. It will be a good addition to the new educational materials, so users can not only read them but also join in and see how things are used. There is a question from the YouTube comments: are there any plans to support other collectors, like the OTel Collector, Telegraf, or the blackbox exporter? — Exactly — we already talked with Mathias about supporting a Telegraf preset, so stay tuned, we're working on it. I hope to release it in Q4 as well; it's not a confirmed plan yet, but we have already talked about different presets, because preset mode takes a lot of burden off the user's shoulders and allows simpler setups and configurations out of the box. So we hope to release new presets soon. — Okay, thank you. Fred, let me remove you from the screen along with your presentation. Roman, should I present your slides for you? — Yeah, okay, let me start, and let me also remove you from the stage. Okay, bye.
My name is Artem, and I'm going to talk about updates in VictoriaMetrics Cloud. There are three major categories for me: first, a lot of quality-of-life improvements; second, we added an integrations section and are going to expand it heavily; and third, there was quite a nice internal migration where we changed our cluster topology — I'll talk about that a bit later.

So first, a lot of improvements in our UI — the quality-of-life improvements. We changed the overview page and the deployment overview page; you can find a lot of information there and use it directly — you don't need to copy things around, you just press a button. Measurements are now visualized better, so you can see what is going on and react faster. We also changed what I believe is our main page right now: the creation of a new deployment, single-node or cluster version. We added explanations of what it means to run a cluster versus a single node, information about cloud providers, disk sizes, recommended capacities, and things like that. There will be a lot more improvements there over the next three months, but it already looks much better than it used to.

Next, a rework of the access token page: a better view, more security — we'll talk about security a bit later. On creation of a new token we started heavily using this right-side widget, for integrations and for all the helpers; it works and looks quite nice. Also a new user management page — feel free to play with it. We still need to rework a couple more pages, and after that we'll be able to change our flows and provide a much better user experience; I hope I'll talk about that at our next meetup. That's our pipeline, and it already looks and behaves much better.

Next, a small but important update about security. All sensitive fields are now hidden, with a copy button added. The major improvement here is that if you reveal or copy this information, all those actions are reflected in the audit log, and you immediately see, for example, that a cloud admin revealed the Grafana token yesterday. This applies to all reveal actions in integrations where you need to copy something — API keys, access tokens — and it will be the standard for every other piece of sensitive data.

Integrations: we added a new integrations section. Currently we have nine integrations there, and we aimed to divide them into categories: how you ingest data, how you visualize and read data, and what else you can do with it. We have vmagent, Telegraf and others — that's about how to ingest data from your side into our Cloud. We support a built-in Alertmanager, alerting rules and recording rules if you want, so you can run that in the Cloud and set up notifications to Slack, Teams and other receivers — I believe everything that's supported. If you run your own Alertmanager, you can specify that in the configuration as well. We have sections like Kubernetes that guide you through starting monitoring of your Kubernetes cluster with VictoriaMetrics Cloud — it's a step-by-step, interactive guide. And as I mentioned, we are going to expand this section heavily. The next one will be agentless — not serverless, but agentless — metric collection from AWS via Firehose. We already support it, VictoriaMetrics already supports it, and you can do it right now, but it will definitely be useful information for our customers and users on how to achieve that.
Here is an example of our typical integration — this one is vmagent — with a nice overview of what it is. You specify the deployment and the access token; this example is for Kubernetes, so under the hood it uses our official Helm chart and generates the values for that Helm chart. What you need to do is just pick the right deployment and the right tokens (or create them — there is an option for that), then run the generated command in your terminal, and in the end you have the integration working. Right now it's vmagent, but as I mentioned, there are several of them.

The next change was actually a huge one in general. In Cloud we provide two deployment types, single-node and cluster version, and we completely changed the topology of how we run clusters: we now provide a separate deployment per purpose. Your deployment is your deployment — there are no noisy neighbors, everything that happens with it is up to you, and we guarantee there will be no interference or influence from anyone else. We used to run a more straightforward topology with a replication factor: a bunch of vminserts, vmstorages and vmselects with replicationFactor=2 per cluster. We switched to a topology where we run two independent clusters, use vmagents for the data replication between them, and load-balance reads with a first_available policy.

Why did we do this? First, maintainability. In the previous setup, when you apply configuration changes — and you may know that some of the configuration is just CLI flags and parameters — changing them results in a restart of the pods, and sometimes that can noticeably affect your engineers' read and write flow, which you want to avoid. The current topology allows us to apply updates in one cluster, switch the load, and then apply the changes to the other cluster. Under the hood there are two flows, and the system decides whether it's a significant update or a fast path; in both cases we remove one of the zones from the read flow, but we also understand how much time we need for the maintenance, and for the user it results in zero downtime. We have already tested this — one test was not so good, but all the other tests were extremely nice, and I really like the current state of it.

If you want to replicate this scheme in open source, we have the victoria-metrics-distributed Helm chart — it is the same topology. The only difference is that the distributed chart uses vmauth as the load balancer, while in our scheme we use Kubernetes Services in front of vminsert and vmselect as the load balancers. We might switch to vmauth eventually, because by our observation it works better, but currently this is fine enough.

The good part of the topology change: it's now much easier to operate and automate, and there are no observations of downtime or performance degradation. But the topology is more complex, and we introduced an additional layer of vmagents, which increases resource usage. Again, we have not raised prices or anything like that — that's our internal cost and we are willing to pay it for stability — but if you want to adopt this in your own open source or Enterprise installation, please keep it in mind. Thanks.
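For anyone who wants to sketch this dual-cluster idea outside of the Cloud, here is roughly what it looks like with plain vmagent and vmauth — approximately what the victoria-metrics-distributed chart automates. Hostnames and the tenant ID are made up; treat this as a sketch of the idea, not the chart's actual output:

```yaml
# Writes: vmagent replicates every sample to both zones by repeating -remoteWrite.url:
#   vmagent -remoteWrite.url=http://vminsert-zone-a:8480/insert/0/prometheus/api/v1/write \
#           -remoteWrite.url=http://vminsert-zone-b:8480/insert/0/prometheus/api/v1/write
#
# Reads: vmauth prefers zone A and falls back to zone B only when A is unavailable,
# so one zone can be taken out for upgrades without user-visible downtime:
unauthorized_user:
  url_prefix:
    - "http://vmselect-zone-a:8481/select/0/prometheus"
    - "http://vmselect-zone-b:8481/select/0/prometheus"
  load_balancing_policy: first_available
```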
Next, about other updates: security improvements. You may not notice them — and that's good — but there's a lot of security work: pleasing browsers, security policies and stuff like that. It's a precondition for certifications, and there will be a lot more security updates improving the situation. More documentation is coming as well, with better explanations of what is going on.

About new things: I believe we just recently released documentation for our API, and for support — what you can expect, how to reach us, and what it means for you. Besides that, we introduced a couple of new public API endpoints for updating and operating recording rules and alerting rules, so you can automate this process, and we're going to add more APIs.

So what's next? First, billing enhancements: more visibility into what is going on. There is already information there, but there will be even more information and more buttons to download it. If you are missing some information right now, feel free to reach out to our support — some customers already do, and we're happy to provide everything — but we definitely want to eliminate that step and provide better functionality. Next, also based on customer requests, we're going to expose deployment health and operational metrics outside of the Cloud, as regular metrics with Grafana dashboards. A typical case for VictoriaMetrics itself, and for the Cloud as well, is platform-as-a-service: when someone uses your platform, you want to understand the read load and the write load. In the Cloud you can currently see this in the monitoring tab — we provide all the information on the deployment side and you can check what is going on — but besides that we're also going to provide the option to see this information on your side and store these metrics somewhere close to you, so you have fast and quick access. As I mentioned, more integrations: the next one will be agentless AWS monitoring via Firehose, and we're also going to add more integrations around reading, writing and notifications, divided by section — that works really nicely, and we're definitely going to invest time there. And more public APIs: the whole idea is to cover everything with public APIs and release a Terraform provider for it — again a customer request which we are definitely going to fulfill.

Since I have some time left, I want to mention that we are hiring in Cloud — we are hiring a product lead. So if you are currently looking for opportunities, feel free to reach us: public Slack, LinkedIn, Twitter, artem@victoriametrics.com, or just write in the chat here on YouTube — any way works, I will definitely reply. And some useful links, so you can sign up and read the documentation about the Cloud. That's it from my side, let me hand it back to Roman.

Thanks for the presentation. The next one is Aliaksandr, if I'm not mistaken, with the VictoriaLogs update. — I also think so. Alex? — Hello, do you hear me? — Yes, I can hear you, let me find your presentation... actually I cannot find it. — I just uploaded my presentation... — Okay, there at the end — yeah, nice. Okay, let's start.

I'll talk about what's new in VictoriaLogs in the third quarter of this year, and I'll also cover the roadmap for VictoriaLogs. Let's start with bug fixes, because we prioritize bug fixes above feature requests and features. So let's look at some fixes from the last quarter. The first bug fix is about the execution of queries with OR filters when these filters are applied to distinct fields.
Previously such filters didn't return all the expected logs. For instance, the filter f1:foo selects all the logs with the foo word in the f1 field, and the next filter selects all the logs with the bar word in the f2 field. But when you tried combining these filters with the OR operator — which must return all the logs matching the first filter, the second filter, or both — this didn't work properly in VictoriaLogs, and it has been fixed recently.

The next bug fix is related to the proper calculation of the field_values, uniq and top pipes. It was discovered in the VictoriaLogs data source for Grafana while implementing auto-suggestion and the display of label filters. For instance, a query which should return all the field values for the level field over the last five minutes could return an inflated number of matching logs. Another example: a query which should return unique user_id values across all the logs with the error word could return more user_id values than needed, because some of those user_ids didn't actually occur in logs with the error word. This could break auto-suggestion in the VictoriaLogs plugin, and it has been fixed.

The next bug fix is related to the count_uniq function, which could return field values with zero matching logs — now it works as expected.

The next bug fix is related to the stream_context pipe. To refresh your memory: stream_context is a pipe which allows you to select the surrounding logs for the selected logs. For instance, if you investigate some stack trace, and every stack trace line ends up in a distinct log entry because of some misconfiguration, then stream_context can help you investigate the whole stack trace — you just write stream_context with how many log entries you want to see before and after the given log, and it shows those logs. There was a bug in VictoriaLogs which prevented showing logs outside the selected time range, and now it has been fixed. For instance, if you use a query which selects all the logs with the error word at a given timestamp with second precision and then apply stream_context to the result of this filter, previously stream_context couldn't return logs which do not belong to that same second; now it works as expected and returns any number of logs before and after the selected log entry.

The next bug fix fixes data ingestion from Logstash when using the Elasticsearch protocol — now it works out of the box, while previously you had to implement a workaround. The next fix is related to proper ingestion of logs with a time field in SQL datetime and Unix timestamp formats. For instance, the SQL datetime format, as you can see, has a whitespace between the date and the time, which makes it different from the RFC 3339 format that has a capital T letter instead of the whitespace. Now logs with timestamps in this format are properly ingested into VictoriaLogs, and VictoriaLogs can also accept logs whose time field contains a Unix timestamp in seconds.

There are also new bug fixes in the VictoriaLogs plugin for Grafana — here is a screenshot from the GitHub issues of the plugin with the bugs fixed over the last quarter.
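To make the filter and pipe fixes above concrete, here are a few illustrative LogsQL queries. Field names like f1, f2 and user_id are made up, and the lines starting with # are annotations rather than part of the queries:

```
# the OR-filter case across distinct fields that used to miss logs:
f1:foo OR f2:bar

# unique values of a field across logs containing "error" (the uniq/field_values fix):
error | uniq by (user_id)

# surrounding context for matching logs, e.g. to stitch a stack trace back together:
_time:2024-09-30T12:34:56Z error | stream_context before 10 after 10
```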
And what about bug fixes for data persistence? There are none, because the VictoriaLogs data format was designed at the very beginning of VictoriaLogs development and hasn't changed since then, and it has had no data persistence bugs which could lead to the loss of your stored data. By the way, there have been zero breaking changes in the storage format since June 2023, when the first release of VictoriaLogs was made, and this means that VictoriaLogs is safe to use in production.

Let's look at some features of VictoriaLogs. The first feature is support for the OpenTelemetry format for data ingestion: VictoriaLogs now accepts data from OpenTelemetry collectors and other agents which support the OpenTelemetry format for log ingestion, such as the OpenTelemetry Collector. Another enhancement is the ability to pass data ingestion settings via HTTP headers. Previously such settings could only be passed via query args, and some log collectors and log shippers don't support configuring query args, which is why we decided to add support for passing these parameters via HTTP headers. For instance, you can pass which field should be used as the message field, the time field, the stream fields and so on — basically the same parameters which previously could be passed via the common query args of the HTTP data ingestion endpoints.

The next feature is new HTTP APIs. The first API allows you to build graphs from queries: it returns per-step stats for the given query on the given time range, and it returns these stats in a Prometheus-compatible format, so they can be graphed in Grafana or in the web UI. This API is going to be used by the built-in web UI and by the VictoriaLogs plugin for Grafana; you can read more about this endpoint at this link, and build your own integration on top of it if you want. Another API is designed for alerting and recording rules: it just returns query results at the given time. This sounds unremarkable, but the devil is in the details: it returns the query results in a format compatible with Prometheus alerting, and this means vmalert can use this API for building alerts and metrics from logs. This feature, as Roman already said, is in the works and will be ready soon, so vmalert will be able to be configured to send queries to VictoriaLogs and generate alerts and metrics from those logs, and store the metrics in systems like VictoriaMetrics. You can read about this at this link.

The next feature is local time zone support. Previously VictoriaLogs could only accept timestamps with time zone information during data ingestion, and if the time zone information was missing, VictoriaLogs just rejected such logs. Now it accepts such logs, using the local time zone where VictoriaLogs runs. For example, you can ingest logs with a time field which doesn't contain time zone information, like in this example, and you can also use timestamps or time filters without time zone information in queries — in this case the queries are executed in the local time zone of the VictoriaLogs host. This may be useful if you need to investigate logs not in the UTC time zone but in some local time zone.
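A minimal sketch of the header-based ingestion settings described above, using the JSON lines endpoint. The header and field names are recalled from the VictoriaLogs docs and should be double-checked; host, port and the log line itself are made up, and the timestamp also shows the SQL-datetime format mentioned earlier (a space instead of the RFC 3339 "T"):

```sh
curl -X POST 'http://victorialogs:9428/insert/jsonline' \
  -H 'VL-Msg-Field: message' \
  -H 'VL-Time-Field: ts' \
  -H 'VL-Stream-Fields: host,app' \
  -d '{"ts":"2024-10-01 12:34:56","host":"web-1","app":"nginx","message":"hello from the meetup"}'
```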
Let's look at the LogsQL improvements. The first one: you no longer need to put the _stream: prefix in front of stream filters — you can just omit it and the filter will work as before. This change was made to simplify migration for users who run Grafana Loki and are now trying VictoriaLogs, because Grafana Loki's syntax doesn't have such a prefix. Another improvement is the ability to use the dash character instead of the exclamation mark as the NOT shortcut in LogsQL. For instance, -foo is equivalent to the !foo query, and it is equivalent to NOT foo — all of them search for logs without the foo word. This change should simplify migration from Elasticsearch, because the Elasticsearch query language uses the minus sign instead of the exclamation mark for negative filters.

The next improvements are query performance optimizations. VictoriaLogs now performs some types of queries much faster than before — 10 times and more. For instance, if you run an analytical query which doesn't touch the _time field, it will be executed much faster than before. There was also a significant improvement in performance and memory usage for the stream_context pipe when it is applied to streams with a big number of log entries, such as millions and more. And there is a performance improvement for queries with so-called in() filters — multi-exact filters — containing a big number of unique values.

There are a lot of improvements in the web UI for VictoriaLogs. The top improvement is the ability to show the log distribution across the top five log streams with the biggest number of logs. As you can see on the screenshot, there are three different log streams with the biggest number of logs over the selected time range, and it's very easy to highlight and discover issues with the log streams that generate most of the logs. Another important improvement is the ability to enable the auto-refresh option for query results, so query results will be refreshed at the given interval, the same way the VictoriaMetrics web UI does it.

Another addition is vlogscli, an interactive command-line tool for querying VictoriaLogs. It supports navigating query history similar to history navigation in Unix shells: you can press the up and down arrows to navigate the history, or press Ctrl+R / Ctrl+S and type some prefix or word, and it navigates through the history matching it. There is also support for scrolling over large query responses, so you can execute queries which select literally billions of logs, and such queries will be executed without any issues — vlogscli shows you only the first page which fits your screen, and later you can scroll up and down and search over these logs with commands similar to less, because underneath vlogscli uses the less command for paging. You can also cancel long-running queries at any time just by pressing Ctrl+C, and this instantly stops the query execution on the VictoriaLogs side. And vlogscli shows the query execution duration for every query, which may help you optimize query performance. So give it a try — it is available in the latest release of VictoriaLogs.
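The two LogsQL shorthands described above look like this side by side (the app label and the debug word are made up; the # annotations are not part of the queries):

```
# stream filter: the _stream: prefix is now optional
_stream:{app="nginx"} error    # old form
{app="nginx"} error            # new, Loki-like form

# '-' as a shortcut for NOT, easing migration from Elasticsearch query strings
error !debug
error -debug                   # equivalent
error NOT debug                # equivalent
```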
Let's look at the roadmap for VictoriaLogs. The first point is the cluster version. Work on the cluster version is in progress, and it should be finished in the coming months, just before KubeCon North America in the middle of November. But I would suggest not waiting for the cluster version and trying the single-node version right now, because a single-node VictoriaLogs can already replace a multi-node Elasticsearch cluster and handle the same workload as that Elasticsearch cluster. Also, the data storage format of single-node VictoriaLogs is 100% compatible with the cluster version. This means that when the cluster version appears, you can easily migrate from the single-node version to the cluster version by replacing the single-node binary with the storage binary of the cluster version — in this case the single-node data becomes the data of a single storage node in the cluster version. And as mentioned before, the single-node version of VictoriaLogs doesn't lose any data — it has had no data corruption or data loss bugs since the initial release last year — so it is ready for production. Try it.

The next important feature on our roadmap is transparent archival of historical data to object storage. How does it work? It should automatically archive historical data to object storage: you configure for how long you want to keep data in local storage, and after that VictoriaLogs will automatically upload the data to object storage and delete it from local storage. After that you can query all the data — local and stored in object storage — without any issues and without any extra configuration; it should work out of the box, fully transparently to the end user. And the archived data in object storage is equivalent to a data backup, so when this feature is implemented you won't need to make additional backups: the data which has been uploaded to object storage can already be treated as a backup, and it can be used for data recovery on another VictoriaLogs cluster, for instance. That's all, thanks.

Thanks, Alex, for the presentation. There are questions on YouTube regarding the slides. The first question: will VictoriaLogs in the cluster version support resharding of data between shards if you, for example, add more storage nodes? — The answer is no, and the reason is that resharding may increase resource usage significantly: during resharding you need to move huge amounts of data between storage nodes, and this takes CPU time, network bandwidth and disk I/O, and it may harm the current operations and the stability of the VictoriaLogs cluster. So the answer is no, it will not be implemented. — Okay, I've added a controversial answer in the chat. My question regarding this: can I move one day of data from one storage node to another one? — Yes. VictoriaLogs stores data in per-day partitions, and every partition is stored in a separate directory which contains the data itself and the index for this data. This means that this partition directory can be freely copied between storage nodes, between VictoriaLogs single-node instances, and so on. So yes, the answer is yes. — So there will not be any automated resharding, but if you want to move one day of data from one storage node to another, you can do it manually? — Yes, that's correct.
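As a rough illustration of that manual move — the directory layout and paths here are assumptions based only on the per-day-partition description above; verify the actual layout under your -storageDataPath, and stop or snapshot the source instance before copying anything:

```sh
# copy one day's partition from this node to node B
DAY=20241001
rsync -a "/vlogs-data/partitions/${DAY}/" "victorialogs-b:/vlogs-data/partitions/${DAY}/"
```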
That's true, but one internal observation: when you move data this way, from the disk perspective you do redistribute the usage and can utilize the disk space more evenly, but it may result in different CPU and memory usage, because the data will now be sharded differently — one storage node might end up handling everything compared to the others.

Are there other questions? Should we answer them at the end of the session, or continue? — I think answering them right away is a better approach, so we don't lose the context. — Okay. The question is — I also remember it from when you presented — do you have plans to implement support for Kibana? As I understood, this question is about the Elasticsearch query language and the querying API used by Kibana, so it can be rephrased as: will VictoriaLogs support the Elasticsearch querying API that Kibana uses, so that Kibana can be pointed at VictoriaLogs? The answer is: we don't know yet, we haven't decided, because we believe that the web UI for VictoriaLogs will become a much better alternative to Kibana over time, and we're betting on that web UI and on the Grafana plugin for VictoriaLogs. Let's see — if we see high demand from users who want to use Kibana for querying VictoriaLogs, then we can reprioritize this feature. — Let me also comment: it's of course the trade-off between adoption and our plans, and if tomorrow we understand that we must support Kibana and the Elasticsearch query language for better adoption, we may make that decision, but currently we still believe that what we are doing, we are doing right — and if that's not so, just let us know. And of course, from our perspective we understand that migration is a crucial, essential part of switching technologies and it can take a while, so we want to decrease the adoption time as much as possible; that's one of the goals, and it has already been proven with VictoriaMetrics.

The next question: is there any plan to separate access to the logs of different applications? — That's a good feature request, but I think it fits the Enterprise version of VictoriaLogs. If you want this feature to be implemented in VictoriaLogs, please file a feature request and we will then decide whether and how it needs to be implemented. — Isn't that multitenancy? — Yeah, multitenancy — that's a good correction. VictoriaLogs already supports multitenancy out of the box, which means you can separate different users into different tenants, write data to different tenants and select data from different tenants, and these tenants are quite isolated — the data between tenants is quite isolated. But VictoriaLogs itself doesn't provide any authorization between different tenants; you can implement that authorization right now with a vmauth proxy sitting in front of the VictoriaLogs instance. — So in theory, right now you can build a system on top of VictoriaLogs with different access for different teams, using the multitenancy feature and vmauth in front of VictoriaLogs. Thank you.
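A minimal sketch of that vmauth-in-front idea, assuming the tenant is selected via the AccountID header as in the VictoriaLogs multitenancy docs — the header name, tokens and team split are assumptions to adapt:

```yaml
# vmauth config: each team gets its own token and is pinned to its own tenant
users:
  - bearer_token: "team-a-token"
    url_prefix: "http://victorialogs:9428"
    headers:
      - "AccountID: 1"
  - bearer_token: "team-b-token"
    url_prefix: "http://victorialogs:9428"
    headers:
      - "AccountID: 2"
```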
Next question: any plan to add a Perses data source for VictoriaLogs? — Yes, we have such plans, and I think we will eventually add that. — Thanks. And I hope you will implement a delete API for VictoriaLogs. — Yeah, this sounds useful and is frequently needed for logs, and this feature will be implemented. Please file feature requests for the features you think are important for VictoriaLogs — it will help us not lose them and prioritize them later. — A question from my side: soft deletion or hard deletion? — Good question. I think initially it will be soft deletion, because it's much easier to implement, and probably hard deletion later. The distinction between the two is that during soft deletion the data itself isn't instantly deleted from disk, but it becomes invisible for querying, and later the data is deleted from disk during background operations in VictoriaLogs. This is much easier to implement and it works instantly, while hard deletion can take an undefined amount of time depending on the amount of blocks in your system. — Thank you. And the last question, from an anonymous user: when are you going to release a playground for logs, a Grafana data source and so forth? — Talking about logs: we already have a playground for VictoriaLogs, which is available at the playground link. I tried posting the answer to this question in the chat, but something doesn't work as expected and I can't post the link — let me post it to you in a private chat and you can repost it in the public chat. But this playground has an issue: it is based on a source of logs with very low frequency — it generates basically a few new log entries per day — and that doesn't let you explore the full potential of VictoriaLogs performance and the abilities of the VictoriaLogs query language. I think we need to change the source of logs for the playground, and I have a plan to use the GitHub Archive for this particular case. — Could you repost those links from the private chat to the public one? I cannot do it because of some bug. — Yeah. So the plan for the VictoriaLogs playground is to use the GitHub Archive, which contains all the events generated by GitHub users, store them in VictoriaLogs — basically all the events, the full history of GitHub — ingest the logs for every new day, and give all users the ability to query these logs in the playground. — I don't see the links; did you try posting them? — I posted the links; I believe I'm having some trouble with my connection. I posted the link — and also, you're currently a moderator, you can post the links as well. — Okay, I think we should move on to the next slides. Thank you. — Thank you, Alex.

So who is next? Roman, if I'm not mistaken. — Yep. Hello, hello, hello everyone. — Let us disappear. — Okay. So today I'll tell the story of our usage of VictoriaMetrics and how we migrated to using VictoriaMetrics as one of our key components. My name is Roman, I'm a product manager at Percona, and at Percona we work with open source databases, making them run efficiently and properly, I would say. In today's presentation we'll go through a quick explanation of what PMM is, what our architecture is and the challenges we had, as well as why and how we migrated from Prometheus to VictoriaMetrics as one of the sources inside the product, and which results we got. I should admit that this change didn't happen recently, so I think the results I'll be talking about are really time-proven.
We already trust those results. Let me go quickly over what PMM is and what the goal of the tool is. We created PMM as an internal tool, because we are mainly a services company supporting some of the major open source databases, and we created PMM as a tool to help us monitor, manage and track the performance of databases in different deployments. By different deployments and different databases I mean we support MySQL, PostgreSQL, MongoDB and a couple of proxies, to deploy those databases in different topologies. We also support different environments — bare metal, some Amazon deployments including RDS and Aurora, GCP, Azure — and we have different ways of monitoring databases: local monitoring, or remote monitoring when you don't have access to the host machine where the database is running. Usually we prefer local monitoring, where you have an agent and the ability to run a node exporter and collect those metrics, because combining the metrics from the database and from the host is one of the best data sets you can have to really understand database performance — what's impacting it: is it really the database, or is there a disk problem or some other hardware problem affecting your database?

PMM is a classical client-server application: the monitored system is connected to and interacts with the PMM client, which collects all the data, scrapes the database, does all the work, and then the data appears on the server, where the user mainly interacts with it. Our PMM server also interacts with Percona's infrastructure, receiving additional alert templates and advisors — so it receives additional intelligence from our side, and gets update information. When we worked on the initial architecture, when we started building all of this, it was based on Prometheus and the usual Prometheus stack, which looks roughly like that. Then, when you get to the operational problems, it becomes quite complicated to manage all of these elements by yourself, and that's where, in PMM, we made one tool to manage all of this — to rule all those elements on the server and on the client: how to configure, start and run them.

That was working fine, but then we got to the next level of problems — the network. Not everyone who runs databases runs them in, I wouldn't say an unsecured environment, but the environments where people run monitoring tools and where they run databases are usually different, and databases are usually much more protected. When we looked at the set of technologies we support, the different environments the databases can run in, the locations of our agent and how we get the data, it literally became a multiplication of all possible options, and it all led to "here is the next port you need to open to have a collector running on the database". This again became a really huge pain for users: which port do you open if you want to monitor MySQL on bare metal remotely, or what do you do if you need another configuration? So that was a real pain as we added support for more technologies, and people using PMM expected a simpler user experience.
So that's where we started looking at different models of collecting data, and that's when the pull-versus-push discussion started: how do we make it easier to collect and send data, and avoid this pain with the configuration and operation of the monitoring tool? A monitoring tool is supposed to make your life easier and let you focus on the main system, which in our case is the databases. So we looked at our goals and what we wanted to change. We definitely wanted to keep simplicity and make the user experience as simple as possible, because that's the key to user adoption and to letting people easily work with the tool. We also realized that flexibility in the models, having both pull and push options, is something we really needed, and it's not something we can dictate; in some cases it's what the user needs to choose, so they needed that flexibility. As for performance, what we had at that moment was already well optimized, because our specialty is database performance, and we can't afford the monitoring tool to be less performant than the databases. We also have limitations on resources: whatever we do, we can't consume many resources, because the majority of our monitoring infrastructure, I mean the client side, runs on the database hosts, and the monitoring tool cannot consume a significant amount of resources there. Running monitoring is not the main job of the database server; its main job is to run the database and serve users with data. So resources had to be conserved, not extended. Through that selection of options we came to the idea of replacing Prometheus with VictoriaMetrics and the VictoriaMetrics agent: keep the simplicity and make it even easier, get the flexibility of different data ingestion models, and keep everything on the performance and resource side. What we ended up with is a changed architecture. On the client side we added vmagent, at the same level as our exporters; those are all open source, community exporters, some of which we support and maintain ourselves, like the MongoDB one. We already had the pmm-agent, which was just the supervisor and manager for all the exporters, connected to our server part and generating all the configuration for the exporters. On the server side we simply replaced Prometheus with VictoriaMetrics. But just replacing one binary with another wasn't the real goal, because we wanted to make it really seamless for the users. How we did this transition is what I tried to explain on this slide. In some older version we had Prometheus running on the server side, because the server side was the main complexity here. Then we added VictoriaMetrics to the server alongside Prometheus, and for some period of time the new data was written and ingested into VictoriaMetrics, while reads were served both from VictoriaMetrics and from the Prometheus data that was still there. We kept this configuration for the duration of the retention period, which in PMM is one month by default, until the retention period ended and all the data was already in VictoriaMetrics. After that, the UI just reads everything from VictoriaMetrics directly.
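A minimal sketch of the client-side piece of such a push architecture: vmagent scrapes the local exporters over localhost and forwards the samples to the central server via remote write. The file name, targets, and URL are hypothetical; only the vmagent flags shown in the comment are real flags.

```yaml
# scrape.yml consumed by vmagent on the database host (hypothetical targets)
scrape_configs:
  - job_name: "local-exporters"
    static_configs:
      # exporters bind to localhost only; nothing is exposed to the outside network
      - targets: ["127.0.0.1:9100", "127.0.0.1:9104"]

# vmagent is then started roughly like this:
#   vmagent -promscrape.config=scrape.yml \
#           -remoteWrite.url=https://pmm-server.example.com/api/v1/write
```

Because vmagent buffers scraped data on disk while the remote endpoint is unreachable, this layout is also what makes the "metrics survive a network outage" behaviour mentioned below possible.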
That's how users got this, through just the usual PMM upgrades: go upgrade to the new version of PMM, press the button in the UI, get the new version, and they kind of magically got the new component running inside. So what did we end up with as a result, for us and for the users? First, we got flexibility in the network topology for their infrastructure: DBAs or DevOps and infrastructure people can decide how they want to run the monitoring, whether it's the pull or the push model; that is up to them to configure. We also got, thanks to this architecture, the ability to give users troubleshooting data for the moments when the network had a problem. Previously, when we were only doing pull mode and scraping the exporters directly from the PMM server, if there was any network problem and the connection between the server and the database was broken or interrupted, there was simply no data, and it was much harder to understand what had happened. Now, with the VictoriaMetrics agent on the database side scraping and collecting all the data, once the network is restored we get all the data for that period of time, and it's much easier to understand what else happened in the infrastructure. So users got an additional useful feature out of it. At the same time there were no dashboard changes: we were previously using PromQL, and everything moved to MetricsQL; later we started using some advantages of MetricsQL to simplify dashboards and present the data in an easier way. We also got improved storage efficiency on the servers in general, because of the much more efficient resource usage on the VictoriaMetrics side. And the last, and I think the most important, advantage: there were no user complaints about any of this. No one noticed that we had changed something in our architecture, replaced components, and moved from one tool to another; everything just kept working for the users. For me as the product manager that was great, because we delivered all those additional features without losing any of the advantages people had before. So that was our story of switching to and using VictoriaMetrics inside PMM. All components of PMM are open source; we really believe in that and we stay open source. I think that's it, so if there are any questions... Yes, we have one question, let me... Yeah, I also have a question: what do you use for logs? For logs, for the observability story, of course we need logs, and right now we are in the process of looking at options to use for logs. Of course we are looking closely at VictoriaLogs for that, because it would be easier to integrate, and that will probably be our preference, but right now we are testing and comparing, to stay unbiased, so when we have results I can share them with you. Can you share the competitors? Well, in PMM, in our architecture, we also have ClickHouse for one of our components, Query Analytics, and of course we use Grafana for the visualization. So in our selection list right now it's either VictoriaLogs, Loki, or some tools based on ClickHouse.
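A small illustration of the MetricsQL simplification mentioned above: MetricsQL lets rollup functions omit the explicit lookbehind window (it is then derived from the panel step) and provides a `default` operator for filling gaps. The metric name below is just a hypothetical dashboard query, not one taken from PMM.

```
# PromQL typically needs an explicit lookbehind window:
rate(mysql_global_status_queries[5m])

# MetricsQL can infer the window and fill gaps in one expression:
rate(mysql_global_status_queries) default 0
```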
ClickHouse would let us reuse the existing storage, but I think the set of tools based on ClickHouse is less standard; still, ClickHouse may also be a good option. So we are doing that selection right now, and I think in about a month we'll have the path to go with logs in PMM, finally. Okay, thanks. It would be nice to have a public benchmark with VictoriaLogs somewhere. Yes. We have a question from the YouTube chat: is there a reason you didn't use vmctl? vmctl is a tool from the VictoriaMetrics components which allows you to migrate data from various systems into VictoriaMetrics. So, is there a reason why you didn't use vmctl to migrate the data from Prometheus? Our main goal was to find a way to do this as seamlessly as possible and avoid downtime for the users, because some of the users have a really significant amount of data they are monitoring. So we were looking for ways to not duplicate data, not cause downtime, and not do a lot of migrations with the data. Yeah, so basically, making VictoriaMetrics able to read from the Prometheus data blocks makes it more transparent for the user, as I understand it: you don't need this preheat phase where something is copied somewhere, consuming resources and so on. For the user, you just turn it on and you get access both to the recently written data and to the historical data from Prometheus. Yeah, exactly. Okay, thank you very much for your presentation. Thanks for inviting me. Thanks for participating. Yes, now we add Mathias. Yep. Hey everyone, can you hear me? Yep. Awesome. So I'm going to talk about how I got hired at VictoriaMetrics by making my own monitoring and logging system. A bit about me: I've been obsessed with observability for about five years now. Actually, on my first date with my wife I wouldn't shut up about how much I loved observability, and there was even a betting pool at the office about whether that was the only thing I talked about on the first date, and they won. Through a variety of jobs, working everywhere from tiny MSPs to large publicly traded companies, I ended up at VictoriaMetrics as a solutions engineer. I'm also a big fan of animals and memes, so you're going to see a couple of silly pictures during this. Along with my various day jobs, I've worked on an open source project called shiftmon that tries to make monitoring and logging easier and open source for everybody. I've noticed that a lot of well put together observability solutions either require proprietary agents and lead to really expensive cloud bills, or they require a Kubernetes operator, and the places I was working at at the time didn't have access to Kubernetes admins or a Kubernetes cluster; I wanted something that fits the use cases where Kubernetes or expensive cloud products won't work. It also tries to instrument apps automatically: there's an Ansible playbook and a PowerShell script that try to detect what's running on a machine while the playbook or script is running, and then instrument those things automatically. It also comes with a bunch of pre-made Grafana dashboards and some example alerts, so you can get straight to doing the things that are specific to your business or your home lab, rather than spending time creating a generic Linux host dashboard. It also supports users bringing their own configuration: their own Telegraf configs, their own Grafana dashboards, or their own data sources.
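Coming back to the vmctl question from the Q&A above: for teams that prefer a one-off copy over the dual-read approach PMM chose, a vmctl run over a Prometheus snapshot looks roughly like the sketch below. The snapshot path and server address are hypothetical; check the vmctl documentation for the full set of flags.

```
# migrate an existing Prometheus TSDB snapshot into VictoriaMetrics
vmctl prometheus \
  --prom-snapshot=/var/lib/prometheus/snapshots/2024-10-01 \
  --vm-addr=http://victoria-metrics:8428
```

The trade-off is the one described above: the copy consumes resources and takes time on large installations, which is why PMM preferred to keep the old Prometheus data readable until its retention expired.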
Under the hood it's using Ansible to glue together VictoriaMetrics, vmauth, vmalert, Loki for now, and Telegraf; VictoriaLogs is also optional, along with the Grafana OnCall product. The other thing is that I really wanted to have low requirements on the user, so instead of needing access to cloud vendors or object storage or things like that, you only need a Linux machine, Ansible, and to set a few DNS records to get an instance of shiftmon up and running. So how did this lead to my job at VictoriaMetrics? I was using a lot of the VictoriaMetrics tools, I submitted some feature requests to get better InfluxDB support, and I was watching all the conference talks and everything so I could learn more about it. So I already had a very good feeling about VictoriaMetrics, and it was one of the few places where, no matter where I was working, I would still apply. shiftmon also gave me a good chance to show off not only the fact that I have some operations skills but also some soft skills, because I streamed some of the development on YouTube and I've spoken about it at a few conferences, so it gave me a good opportunity to demonstrate that I was a good candidate for a solutions engineer, since that role involves a lot of soft skills. It has also helped even after I got hired at VictoriaMetrics, because to learn about all the Enterprise features I implemented them in shiftmon, so now shiftmon supports anomaly detection and has support for some of the Enterprise features like downsampling. And now it's also helping me move customers off legacy systems: there are some users on older systems that charge per host or by some other unit rather than by the number of active time series, and because I already have a good data set of common things like Linux hosts, Windows hosts, and containers, I can map the two concepts onto each other: hey, a Docker host is going to have roughly this many metrics, a Linux host without systemd monitoring is going to be this many metrics. So it's continuing to help me even now. And you can also deploy shiftmon with a single button. I imported this as a PDF, so I'll have to do the demo later; this was supposed to be a quick video. Before the demo I want to mention that VictoriaLogs is the thing I'm currently working on adding to this. The VictoriaLogs data source is added by default, and VictoriaLogs can be installed by specifying a few Ansible variables. The main things blocking me from spending all my time migrating from Loki to VictoriaLogs are: alerting rules aren't supported yet, though Alexander mentioned that's going to be fixed soon; the dashboarding experience requires a couple of transforms to work with things like pie charts and time series; and the last thing is the alert state for Grafana alerts. To make Grafana alerting scale you need to send all the alert state to Loki rather than to the SQL database, since it can have arbitrary labels in it and that doesn't fit well inside a SQL database; so to solve that problem the alert state gets sent to Loki, and there's not a way to do that in VictoriaLogs yet. So, let me present my browser screen. Okay, here's an example of a dashboard that mixes together the VictoriaMetrics and VictoriaLogs data sources. One of my favorite things about using VictoriaLogs so far is the ability to get at semi-structured data inside another message.
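As a sketch of what "installed by specifying a few Ansible variables" can look like in practice, a group_vars file for a playbook like this might resemble the following. The variable names, host names, and playbook name are hypothetical illustrations, not shiftmon's actual interface.

```yaml
# group_vars/all.yml (hypothetical variable names)
victorialogs_enabled: true                    # deploy VictoriaLogs alongside Loki
victorialogs_dns_name: logs.example.com       # one of the few DNS records to set up front
victoriametrics_dns_name: metrics.example.com
grafana_dns_name: grafana.example.com

# then run the playbook from any Linux box with Ansible installed, e.g.:
#   ansible-playbook -i inventory.yml site.yml
```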
Right now, using VictoriaLogs with Telegraf and the Loki API, the way Telegraf sends messages to Loki is that it wraps everything up in logfmt, which is very difficult to deal with if you have structured logs as the message, because you can have different types of structured logs, or nested structure, and that can be really tricky to handle in other systems. But here, VictoriaLogs lets you pipe the unpacked logfmt into another operation and keep unpacking data inside of it, so you can get these really pretty tables here. Another nice thing is that it does work with pie charts; it just requires those couple of little transforms, and I believe there's an issue to get that fixed in the near future. After that, I plan on porting all the existing dashboards over as well. And then, do we have time to show off an anomaly detection demo? I know Aram has been wanting one of those for a while. We do, we have time, definitely. Perfect. So as an example, I've been working with Fred from the anomaly team on a preset for Telegraf blackbox monitoring. Both in my job and in my personal life, people tell me that the services I run are slow, but they don't have a millisecond value or some observable definition of what slow is, so anomaly detection helps me get a good idea of what slow is without non-technical users having to communicate what that means. This dashboard right here shows the current anomaly score, sorted from high to low; anything above one should be considered an anomaly. It also shows the expected value, the minimum expected value, and some of the other values generated by anomaly detection, as a high-level overview. Then, if you scroll down, you can select an individual service. We were looking at Blocky earlier, and we can see here that there were a couple of anomalies for that service; below that it also shows what the expected values were versus the actual values. This was covered in the H1 blog post for vmanomaly if you want more details. The other nice thing is that, since it's using Ansible as the abstraction, you can use things like Ansible Semaphore, or just run it on the CLI, to deploy it; so with just two button clicks you're deploying a shiftmon host. That's all I had. Any questions? Yeah, thanks for the presentation. It's really nice. I hadn't seen this live before; I had only seen your talks recorded on YouTube, but I'd never touched it myself, so it's nice that you presented it. It looks good, and it's also great how you can compose a product by picking open source products. Being open source still amazes me, because you can build a whole product based on tools that exist out there on the internet and are maintained by many people and big communities. Yeah, it's been really nice, because everything is super simple to deploy; I think when I was researching migrating the metrics from InfluxDB, I had VictoriaMetrics up and running in about an hour. Cool. And are all of those components that you use written in Go? I believe anomaly detection and Grafana OnCall are written in Python, so all the standard components that come with the default deployment are written in Go, and some of the optional ones are not. Uptime Kuma is an optional deployment for blackbox monitoring, and that's written in Node, but if you want to stick strictly to Go you can use the Ansible playbook to set up blackbox monitoring with Telegraf instead. I see.
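The logfmt unpacking described above maps to the `unpack_logfmt` pipe in LogsQL. A minimal sketch, with a hypothetical field filter and field names:

```
# last 15 minutes of hypothetical Telegraf-shipped logs:
# unpack logfmt key=value pairs from the raw message, then aggregate by level
_time:15m app:"telegraf"
  | unpack_logfmt from _msg
  | stats by (level) count()
```

Each unpacked key becomes a regular log field, so later pipes can keep filtering or aggregating on it, which is what makes the nested tables shown in the demo possible.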
Thanks. Yeah, we have a question from a user on YouTube: are the anomalies statically set, or are they dynamic, based on historical data? The anomalies are based on historical data, and something I accidentally glossed over is that there's also a minimum threshold. I decided that anything that isn't above 50 milliseconds for an HTTP or DNS query shouldn't be considered an anomaly, because the DNS server response time is so narrow from low to high that things with a perfectly reasonable response time were being flagged as anomalies. So it's kind of a mix, but it's based on historical data; and Fred, feel free to correct me if I say anything silly. Yeah, Fred has to be put on the stage in order to comment. Hi guys. Regarding the question about anomalies: basically, vmanomaly is used under the hood, and vmanomaly uses statistical models and machine learning models, and they both work on historical data. What Mathias was also talking about is that it doesn't only use models under the hood; it also uses some business-specific arguments that the user can tweak to decrease the number of possible false positives, and that can be seen from the live view that Mathias presented. There are upper and lower boundaries for the expected values, and sometimes the real values exceeded this interval but there was still no anomaly detected, because Mathias on his end configured the so-called minimum deviation from expected parameter to be around 50 milliseconds, so every deviation of less than 50 milliseconds from what the model expected was not considered an anomaly, and that reduced the number of false positives drastically. Okay, I hope that answers the initial question. If you have any follow-up questions, I mean the users on YouTube, just add them and we'll try to address them later. Thank you, Fred; thank you, Mathias, for the presentation, it was great. Yeah, the next one, if I'm not mistaken, is Jean-Jérôme, right? Yeah. Hi, hello. Cool, thanks, and hello everyone, thanks for following us today. I'm going to cover updates that we call community updates: these are activities and details that came together over the past three months. I want to credit my colleague Denys, who I usually present this with; he can't join us for this presentation, but he put most of it together, so thank you, Denys, for that. First of all, we wanted to share some of our current stats as of a couple of days ago, and it's looking pretty nice, thanks to all of our users and customers who are making this happen, and to our team of course. We're now well over 700 million downloads, 720 million even, and we're looking forward to an even bigger number before the end of the year; this is pretty fantastic, so thanks to everyone who is making this possible. You can see some of the other stats that we follow and love to see, such as everyone who is joining us in our Slack channel; we have a public Slack channel that's really quite active, both for metrics and for logs; the number of issues we deal with, the number of pull requests, and, importantly, of participants. So thanks to all of you, and we're looking forward to the Q4 update, so stay tuned for that. We had, I think as usual, a lot of releases, although with a bit of a shift: as you can see, we had a lot of VictoriaLogs releases, which is fantastic news, and as Alex explained earlier, we're expecting some more big news for VictoriaLogs before the end of the year.
We also continue with VictoriaMetrics, of course; we looked at the latest updates there earlier, as well as the roadmap details, both for VictoriaMetrics and for the LTS releases. So thank you to everyone, the team and all of our contributors in the community who are helping with that. The other big number we wanted to share, which we're really proud of and happy with, is that our GitHub repository is now over 12,000 stars. Thanks to everyone who clicked that button since we started with VictoriaMetrics, and now also VictoriaLogs; it's a great number to see, and we just love the feedback, so if you're happy, we're happy; keep the stars coming, thank you. Right, and this is probably the biggest piece of news on the community side of things that we want to share: it's a project that Denys is running, so I'm speaking on his behalf, and it's our new community organization on GitHub, which we invite you to follow and participate in. This is where we want to help any users who have a project that runs on or works with VictoriaMetrics: we will support you in having it there, in this community organization. Denys is also working on more of a web presence for the VictoriaMetrics community, so we will have news on that over the next few months as well. It's a really cool project, and most of you will probably know Denys from our Slack channel, so feel free to reach him there if you have questions about this; and of course do go and check it out, we'd love the participation and the feedback on it. Right. Something that's new, in the sense of content we've produced in the past three months, is that our new colleague Phuong has started, and keeps maintaining on a weekly basis, a blog post series on Go. You probably all know that VictoriaMetrics is written in Go, and that all of us, or most of us, are huge Go fans, if not Go developers, so we like to say we're Go-first. It's a fantastic blog series, it's been weekly, and it's all to do with Go: how to use Go, how to optimize with Go; it also uses VictoriaMetrics at times as background to tell these stories. So if you haven't seen this blog series yet, I invite you to go check it out; there's a direct link to all of the posts, and then you also have the individual ones. It's just a really great series that Phuong has kicked off, and we're getting lots of great feedback on it, so if you're a Go developer, or if you're interested in Go, I encourage you to check it out. And of course we are also publishing content and thought pieces on observability, on monitoring itself, on our core activities. One blog post that has generated quite a bit of interest and lots of questions and discussions recently is a post we did with Alex on the rise of open source time series databases; if you haven't read it yet, please do, it's a really nice thought piece by Alex, and it goes with the other blog post listed here on software licenses and open source versus revenue growth rates; they're kind of connected, so I invite you to read both of those. Then our new colleague Jun has also started a blog post series, specifically on questions and situations that our users bring to us as they work with VictoriaMetrics.
So you have the posts here, Community Questions and also one on troubleshooting time series databases, both by Jun, and you can expect more of those types of posts as well. Then, we've been in the news: we made a big announcement roughly two or three months ago on the new release of VictoriaMetrics Cloud, which we heard updates on earlier, and that got us some nice pieces of coverage, especially in The Register; a nice piece there that also generated questions and discussions. We also talk to journalists regularly, which is very nice and we're quite thankful for that, and to different analysts, one of them being James Governor from RedMonk; we've shared a couple of tweets here that he posted after we spoke to him. This helps us get feedback from experts who work in the industry and talk to other organizations like us, and it also helps us spread the news, so we're always thankful for that type of press coverage and discussion. We're also quite busy for the end of this year in terms of talks; it's been a really nice year anyway, with quite a few events, conferences, and online talks, and you can see the ones that are coming up next. For instance, we'll be in Paris next week at the headquarters of one of our users, where Alex will be presenting the latest features from VictoriaLogs. Mathias, who spoke earlier, will be speaking at Conf42; you can see the list of conferences here. A nice one, and the first of its kind, is the Open Source Observability Conference, an online conference you can sign up for on the 24th of October. I think the biggest one in the list here is probably KubeCon in Salt Lake City, so if you're planning to attend, we're looking forward to seeing you there. We've put the links here to the other conferences and talks we're going to do; some of them will be about logs, some about VictoriaMetrics, and so on, so it'll be a nice mix of talks and of course an opportunity to meet you. If you're in any of those areas and you want to meet, do let us know, in our Slack channel for example. Then there are also lots of activities, which we're really thankful for, from within the community. For instance, today there's a group, Women in Cloud Native, who come together regularly, and they had a talk and an event today about enhancing cloud native observability with OpenTelemetry and VictoriaMetrics, which we were really happy to see, and we'd love to work more with that particular group. Also, our colleague Haley posted something on how to set up HA VictoriaMetrics; there's a team at Zato who wrote about how and why they migrated to VictoriaMetrics; Sufan Bushara wrote about VictoriaMetrics as well; and Zenko writes quite regularly about both VictoriaMetrics and Datadog, so thanks to him, and thanks for all of those community activities. With that, thanks to you all; there's a link here to our community page where you can find all of the different ways to reach us, and we look forward to talking to you at one of the upcoming events, in our Slack channel, or through any other channel you use to contact us. Yeah, thanks for the talk, JJ, for the slides, and for collecting
all this stuff together. I would also add that during the summer we had plenty of events to participate in, talks recorded on YouTube, and articles as well, so I encourage everyone who is interested in what VictoriaMetrics does, and in the topics we discuss, which are always technical, to just Google our name. Yeah, so I guess that's all. The last section is Q&A; do we have any questions to answer? Okay, someone disconnected everyone. Are you there? You're muted, probably. Yeah, I was wondering who disconnected everyone. I currently have some additional noise, so I'll be muted and off stage. Yeah, sorry, so Aram, as a parent, has responsibilities in the evening, and for me it is similar: I also have a kid who will soon go to sleep, and there will be a lot of noise. Since we don't have questions in the Slack channel to discuss right now, I don't think we need to drag this out; we have already spent two hours on the meetup, and I think that's enough for users; it's hard to sit and listen for two hours when you're not the one talking. I want to thank once again everyone who contributed to this meetup and all our guests who presented today. It's always nice to see how other people see the products that you work on, and it's very good to hear the feedback; it doesn't matter whether it's good feedback or negative feedback, we really appreciate it. If you can create an issue, write an article, or something like that, that always helps to grow the popularity of the product, grow the community, and exchange opinions, thoughts, and ideas, which is what I believe the open source community is for. Okay, so thank you very much again, and please find us at any of the events that JJ mentioned, for example the next KubeCon or some of the European events; we also plan to participate in KubeCon Europe next spring in London. So you have plenty of chances to find someone from the VictoriaMetrics team and talk about GitHub issues that haven't been closed yet, which always happens when someone finds us at a conference. Okay, anyway, thanks again everyone, I hope you have a good rest of the day, and thanks for coming; see you at the next meetup.