Logging
Logging at Pulze.ai
Introduction
Observability is key to running and optimizing machine learning models smoothly, especially when managing large language models (LLMs) in modern applications. As part of the LLMOps stack, Pulze.ai plays an essential role in enhancing observability by providing robust logging features.
Every request, successful or failed, is logged and can be retrieved for inspection. This detailed logging gives you a comprehensive understanding of the interactions with your models. The logs are made available for your labeling team or annotators to sift through, rate, and provide feedback on.
This feedback is vital, as it is used to further fine-tune your application; rating log entries and providing feedback is encouraged from the start and throughout your use of the Pulze.ai service.
Moreover, we are excited to announce that an SDK will soon be available, enabling you to manage your logs programmatically for a streamlined experience.
Pulze.ai’s Logging
Pulze.ai logs provide invaluable insights into the interactions between your users and your applications. For every request, successful or not, they record crucial information such as the timestamp, model used, latency, cost, prompt, and response.
This provides you with a rich dataset that can be used to analyze the performance of your models and identify any potential issues. With the added feature of logging failed requests, you can also use this information to improve the resilience and reliability of your models.
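To make the recorded fields concrete, the sketch below models a single log record as a Python dataclass. The field names and types are illustrative assumptions based on the information listed above, not the exact schema Pulze.ai returns.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class LogEntry:
    """Illustrative shape of a single Pulze.ai log record (assumed field names)."""

    timestamp: datetime      # when the request was received
    model: str               # model that served the request
    latency_ms: float        # end-to-end latency in milliseconds
    cost_usd: float          # cost attributed to the request
    prompt: str              # prompt sent by your application
    response: str            # model response (may be empty for failed requests)
    success: bool            # False for failed requests, which are logged as well
    labels: dict[str, str]   # optional labels attached to the request
```

Because failed requests still produce a record, you can analyze failure patterns alongside successful traffic using the same data.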
Accessing Your Logs on Pulze.ai Platform
Accessing your logs on the Pulze.ai platform is straightforward. Simply log into the Pulze.ai platform and navigate to the ‘Logs’ section from the left-hand menu.
In the ‘Logs’ section, you can view and filter your logs based on various criteria. You can search for specific prompts and responses, filter based on labels, or even filter by specific apps and their corresponding API Keys.
Log Analysis
To view more in-depth information about a specific request and its response, click on the log row. You will see detailed parameters and values of the request, along with a comprehensive view of the response data. This depth of information allows you to precisely analyze the interactions with your machine learning models.
Additionally, Pulze.ai logs can be a powerful tool for your labeling teams or annotators. They can sift through the logs, rate each log entry, and provide feedback. This feedback can then be used to further fine-tune your machine learning models, enabling you to improve the performance and reliability of your applications.
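As an illustration of the kind of analysis these logs enable, the sketch below aggregates a collection of log records (using the illustrative LogEntry shape from earlier) into per-model latency, cost, and failure-rate summaries. It assumes you have already retrieved or exported the records; the function and field names are assumptions, not a Pulze.ai API.

```python
from collections import defaultdict


def summarize_by_model(entries: list[LogEntry]) -> dict[str, dict[str, float]]:
    """Aggregate request count, latency, cost, and failure rate per model."""
    buckets: dict[str, list[LogEntry]] = defaultdict(list)
    for entry in entries:
        buckets[entry.model].append(entry)

    summary: dict[str, dict[str, float]] = {}
    for model, logs in buckets.items():
        total = len(logs)
        failures = sum(1 for log in logs if not log.success)
        summary[model] = {
            "requests": total,
            "avg_latency_ms": sum(log.latency_ms for log in logs) / total,
            "total_cost_usd": sum(log.cost_usd for log in logs),
            "failure_rate": failures / total,
        }
    return summary
```

Running such a summary over a day's worth of records gives a quick per-model view of where latency, spend, or failures concentrate.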
Using Labels in Logs
Even though labels are not the primary focus of this ‘Logging’ section, they are integral to managing and examining your logs. Labels can be assigned to any request, which simplifies filtering and pinpointing specific requests in your logs. They can also be used to track feature usage and identify potential problems, enhancing your ability to manage and optimize your machine learning models.
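As a brief illustration of how labels simplify this kind of filtering, the snippet below selects log records carrying a given label from the illustrative LogEntry records sketched earlier; the label key and value are hypothetical.

```python
def filter_by_label(entries: list[LogEntry], key: str, value: str) -> list[LogEntry]:
    """Return only the log records tagged with the given label value."""
    return [entry for entry in entries if entry.labels.get(key) == value]


# Example: isolate requests tagged with a hypothetical "feature" label
# checkout_logs = filter_by_label(entries, "feature", "checkout")
```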
For more information about labels, including detailed examples of how they can be used, please refer to our additional documentation on labeling at https://docs.pulze.ai/features/labeling.
Coming Soon: SDK for Logging
We understand that programmatically managing your logs can provide a more efficient and streamlined experience. That’s why we’re working on developing an SDK that will enable you to do just that. With this SDK, you’ll be able to retrieve and analyze your logs programmatically, providing even greater control and flexibility over your data.
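Purely as a sketch of what programmatic log retrieval could look like once the SDK ships: the package name, client class, and method signature below are hypothetical assumptions, not a released or confirmed interface.

```python
# Hypothetical sketch only: the SDK is not yet released, so the package name,
# client class, and method signature below are illustrative assumptions.
from pulze import Pulze  # hypothetical package and client name

client = Pulze(api_key="YOUR_API_KEY")

# Retrieve recent failed requests for a specific app (hypothetical parameters)
failed_logs = client.logs.list(app_id="my-app", success=False, limit=100)

for log in failed_logs:
    print(log.timestamp, log.model, log.latency_ms, log.prompt[:80])
```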
Conclusion
As a part of the LLMOps stack, Pulze.ai is committed to helping you achieve optimal performance and reliability in your machine learning applications. By providing detailed logging services and planning to launch the SDK for logging, we’re working hard to improve your experience and give you the tools you need to succeed. We look forward to continuing to support your efforts and are always here to help with any questions or concerns you may have.