Jon Allman | Puget Systems
https://www.pugetsystems.com/bios/jonallman/
Workstations for creators.

LLM Inference – Professional GPU performance
https://www.pugetsystems.com/labs/articles/llm-inference-professional-gpu-performance/
Thu, 22 Aug 2024
How do a selection of GPUs from NVIDIA's professional lineup compare to each other in the llama.cpp benchmark?

LLM Inference – Consumer GPU performance
https://www.pugetsystems.com/labs/articles/llm-inference-consumer-gpu-performance/
Thu, 22 Aug 2024
How do a selection of GPUs from NVIDIA's GeForce series compare to each other in the llama.cpp benchmark?

Tech Primer: What hardware do you need to run a local LLM?
https://www.pugetsystems.com/labs/articles/tech-primer-what-hardware-do-you-need-to-run-a-local-llm/
Mon, 12 Aug 2024
What do you need to consider when starting to run LLMs locally?

Effects of CPU speed on GPU inference in llama.cpp
https://www.pugetsystems.com/labs/articles/effects-of-cpu-speed-on-gpu-inference-in-llama-cpp/
Mon, 01 Jul 2024
What effect, if any, does a system's CPU speed have on GPU inference with CUDA in llama.cpp?

Puget Mobile 17″ vs M3 Max MacBook Pro 16″ for AI Workflows
https://www.pugetsystems.com/labs/articles/puget-mobile-17-vs-m3-max-macbook-pro-16-for-ai-workflows/
Tue, 28 May 2024
How does the new Puget Mobile 17″ compare to the MacBook Pro M3 Max 16″ in performance across a variety of AI-powered workloads?

Local alternatives to Cloud AI services
https://www.pugetsystems.com/labs/hpc/local-alternatives-to-cloud-ai-services/
Thu, 11 Apr 2024
Presenting local AI-powered software options for tasks such as image & text generation, automatic speech recognition, and frame interpolation.

Stable Diffusion Linux vs. Windows
https://www.pugetsystems.com/labs/articles/stable-diffusion-linux-vs-windows/
Mon, 01 Apr 2024
How does the choice of operating system affect image generation performance in Stable Diffusion?

Benchmarking with TensorRT-LLM
https://www.pugetsystems.com/labs/hpc/benchmarking-with-tensorrt-llm/
Fri, 16 Feb 2024
Evaluating the speed of GeForce RTX 40-Series GPUs using NVIDIA's TensorRT-LLM tool for benchmarking GPU inference performance.

Experiences with Multi-GPU Stable Diffusion Training
https://www.pugetsystems.com/labs/hpc/multi-gpu-sd-training/
Mon, 29 Jan 2024
Results and observations from testing a variety of Stable Diffusion training methods across multiple GPUs.

Stable Diffusion LoRA Training – Consumer GPU Analysis
https://www.pugetsystems.com/labs/articles/stable-diffusion-lora-training-consumer-gpu-analysis/
Wed, 20 Dec 2023
How does performance compare across a variety of consumer-grade GPUs for SDXL LoRA training?
