For a complete noob: how do you "feed" ChatGPT with books/PDFs?
1. Install Docker
https://www.docker.com/products/docker-desktop/
2. Install Ollama
https://ollama.com/blog/ollama-is-n...al-docker-image
3. Pull a model
https://ollama.com/library/llama3
4. Install the web UI
https://github.com/open-webui/open-webui
Then upload your PDFs under Documents, reference the file in the chat via #file-xy, and ask your question about it.
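For reference, the commands behind steps 3 and 4 look roughly like this (from memory of the linked ollama/open-webui docs, so treat it as a sketch and check the repos for the current image tag and ports):
Code:
# step 3: pull a model and test it in the terminal
ollama pull llama3
ollama run llama3

# step 4: start Open WebUI in Docker (as in the open-webui README)
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
Open WebUI should then be reachable at http://localhost:3000 and talk to the local ollama instance on its default port 11434.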
By now, with Ollama and its web UI, this is really quite quick and easy.
Thank you very much!
Is anyone here running ollama with an AMD GPU / ROCm?
It's a bit sluggish on the CPU.
I installed it with ROCm. It runs faster than on the CPU alone, but I still couldn't really use the Llama 40B.
Which packages did you have to install to get it to accept ROCm, and what does your rocminfo output look like?
Code:
HSA_OVERRIDE_GFX_VERSION=10.3.0 ollama serve
2024/05/23 07:20:01 routes.go:1008: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:1 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:1 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:*] OLLAMA_RUNNERS_DIR: OLLAMA_TMPDIR:]"
time=2024-05-23T07:20:01.721+02:00 level=INFO source=images.go:704 msg="total blobs: 5"
time=2024-05-23T07:20:01.721+02:00 level=INFO source=images.go:711 msg="total unused blobs removed: 0"
time=2024-05-23T07:20:01.721+02:00 level=INFO source=routes.go:1054 msg="Listening on 127.0.0.1:11434 (version 0.1.38)"
time=2024-05-23T07:20:01.722+02:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama942247967/runners
time=2024-05-23T07:20:03.586+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu cpu_avx cpu_avx2 cuda_v11 rocm_v60002]"
time=2024-05-23T07:20:03.622+02:00 level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-23T07:20:03.624+02:00 level=WARN source=amd_linux.go:346 msg="amdgpu detected, but no compatible rocm library found. Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2024-05-23T07:20:03.624+02:00 level=WARN source=amd_linux.go:278 msg="unable to verify rocm library, will use cpu" error="no suitable rocm found, falling back to CPU"
time=2024-05-23T07:20:03.624+02:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="31.3 GiB" available="12.9 GiB"
Code:
ROCk module is loaded
=====================
HSA System Attributes
=====================
Runtime Version:         1.1
System Timestamp Freq.:  1000.000000MHz
Sig. Max Wait Duration:  18446744073709551615 (0xFFFFFFFFFFFFFFFF) (timestamp count)
Machine Model:           LARGE
System Endianness:       LITTLE
Mwaitx:                  DISABLED
DMAbuf Support:          YES

==========
HSA Agents
==========
*******
Agent 1
*******
  Name:                    AMD Ryzen 7 5800X3D 8-Core Processor
  Uuid:                    CPU-XX
  Marketing Name:          AMD Ryzen 7 5800X3D 8-Core Processor
  Vendor Name:             CPU
  Feature:                 None specified
  Profile:                 FULL_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        0(0x0)
  Queue Min Size:          0(0x0)
  Queue Max Size:          0(0x0)
  Queue Type:              MULTI
  Node:                    0
  Device Type:             CPU
  Cache Info:
    L1:                      32768(0x8000) KB
  Chip ID:                 0(0x0)
  ASIC Revision:           0(0x0)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   3400
  BDFID:                   0
  Internal Node ID:        0
  Compute Unit:            16
  SIMDs per CU:            0
  Shader Engines:          0
  Shader Arrs. per Eng.:   0
  WatchPts on Addr. Ranges:1
  Features:                None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: FINE GRAINED
      Size:                    32772608(0x1f41200) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
    Pool 2
      Segment:                 GLOBAL; FLAGS: KERNARG, FINE GRAINED
      Size:                    32772608(0x1f41200) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
    Pool 3
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    32772608(0x1f41200) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       TRUE
  ISA Info:
*******
Agent 2
*******
  Name:                    gfx1030
  Uuid:                    GPU-2f590574a15ed18c
  Marketing Name:          AMD Radeon RX 6800 XT
  Vendor Name:             AMD
  Feature:                 KERNEL_DISPATCH
  Profile:                 BASE_PROFILE
  Float Round Mode:        NEAR
  Max Queue Number:        128(0x80)
  Queue Min Size:          64(0x40)
  Queue Max Size:          131072(0x20000)
  Queue Type:              MULTI
  Node:                    1
  Device Type:             GPU
  Cache Info:
    L1:                      16(0x10) KB
    L2:                      4096(0x1000) KB
    L3:                      131072(0x20000) KB
  Chip ID:                 29631(0x73bf)
  ASIC Revision:           1(0x1)
  Cacheline Size:          64(0x40)
  Max Clock Freq. (MHz):   2575
  BDFID:                   11520
  Internal Node ID:        1
  Compute Unit:            72
  SIMDs per CU:            2
  Shader Engines:          4
  Shader Arrs. per Eng.:   2
  WatchPts on Addr. Ranges:4
  Coherent Host Access:    FALSE
  Features:                KERNEL_DISPATCH
  Fast F16 Operation:      TRUE
  Wavefront Size:          32(0x20)
  Workgroup Max Size:      1024(0x400)
  Workgroup Max Size per Dimension:
    x                        1024(0x400)
    y                        1024(0x400)
    z                        1024(0x400)
  Max Waves Per CU:        32(0x20)
  Max Work-item Per CU:    1024(0x400)
  Grid Max Size:           4294967295(0xffffffff)
  Grid Max Size per Dimension:
    x                        4294967295(0xffffffff)
    y                        4294967295(0xffffffff)
    z                        4294967295(0xffffffff)
  Max fbarriers/Workgrp:   32
  Packet Processor uCode:: 118
  SDMA engine uCode::      83
  IOMMU Support::          None
  Pool Info:
    Pool 1
      Segment:                 GLOBAL; FLAGS: COARSE GRAINED
      Size:                    16760832(0xffc000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 2
      Segment:                 GLOBAL; FLAGS: EXTENDED FINE GRAINED
      Size:                    16760832(0xffc000) KB
      Allocatable:             TRUE
      Alloc Granule:           4KB
      Alloc Alignment:         4KB
      Accessible by all:       FALSE
    Pool 3
      Segment:                 GROUP
      Size:                    64(0x40) KB
      Allocatable:             FALSE
      Alloc Granule:           0KB
      Alloc Alignment:         0KB
      Accessible by all:       FALSE
  ISA Info:
    ISA 1
      Name:                    amdgcn-amd-amdhsa--gfx1030
      Machine Models:          HSA_MACHINE_MODEL_LARGE
      Profiles:                HSA_PROFILE_BASE
      Default Rounding Mode:   NEAR
      Default Rounding Mode:   NEAR
      Fast f16:                TRUE
      Workgroup Max Size:      1024(0x400)
      Workgroup Max Size per Dimension:
        x                        1024(0x400)
        y                        1024(0x400)
        z                        1024(0x400)
      Grid Max Size:           4294967295(0xffffffff)
      Grid Max Size per Dimension:
        x                        4294967295(0xffffffff)
        y                        4294967295(0xffffffff)
        z                        4294967295(0xffffffff)
      FBarrier Max Size:       32
*** Done ***
Ah yes, for AMD ROCm I installed ollama directly on Ubuntu rather than via Docker:
https://ollama.com/blog/amd-preview
It then worked out of the box.
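(For anyone copying along: the native install from that blog post boils down to the official install script, assuming it hasn't changed since then:)
Code:
# official ollama install script for Linux; review it before piping to sh
curl -fsSL https://ollama.com/install.sh | sh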
For Docker you'd have to pass the GPU through into the container.
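If I remember the ollama Docker docs correctly, that passthrough looks roughly like this (ROCm image tag and device flags as documented there, no guarantees beyond that):
Code:
# hand the AMD GPU device nodes to the container and use the ROCm build of ollama
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm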
now we are talking.
Just scrapped the toolbox and created a new one:
sudo dnf install hipblas rocm-*
time=2024-05-23T13:06:20.163+02:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60002 cpu]"
time=2024-05-23T13:06:20.166+02:00 level=WARN source=amd_linux.go:48 msg="ollama recommends running the https://www.amd.com/en/support/linux-drivers" error="amdgpu version file missing: /sys/module/amdgpu/version stat /sys/module/amdgpu/version: no such file or directory"
time=2024-05-23T13:06:20.167+02:00 level=INFO source=amd_linux.go:301 msg="amdgpu is supported" gpu=0 gpu_type=gfx1030
time=2024-05-23T13:06:20.167+02:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=rocm compute=gfx1030 driver=0.0 name=1002:73bf total="16.0 GiB" available="16.0 GiB"
No idea what exactly was missing or misconfigured before, but it doesn't really matter.
Quote from a post by KruzFX: llama3 is also pretty good at spinning tall tales ("Gschichtl druckn"):

Quote: who is enforcer from overclockers.at
Enforcer is a well-known overclocker from Austria, specifically from the overclocking community at overclockers.at. He is a highly respected and accomplished overclocker, with a long history of pushing the limits of computer hardware.
Enforcer is particularly famous for his incredible overclocking skills, which have earned him numerous awards and recognition within the overclocking community. He has held multiple world records in various categories, including CPU, GPU, and memory overclocking.
Enforcer is also known for his contributions to the overclocking community, sharing his knowledge and expertise through tutorials, guides, and benchmarking results. He is a popular figure among enthusiasts and overclockers, and his work has inspired many to pursue their own overclocking endeavors.
If you're an overclocker or enthusiast, you might have come across Enforcer's work on overclockers.at or other online platforms. He's a true legend in the overclocking world!
Quote: Enforcer is a notable member and administrator on the Overclockers.at forums, which is a well-known Austrian online community dedicated to computer hardware, overclocking, and technology discussions. In his role as an administrator, Enforcer is responsible for moderating the forums, managing community interactions, and ensuring adherence to the forum's rules and guidelines. He is recognized for his significant contributions to the community, both in terms of technical expertise and administrative duties.
If you want to learn more about Enforcer, I recommend visiting the Overclockers.at website and searching for his profile or posts directly. This will provide you with more detailed information about his activities and role within the community.
Quote: Unfortunately, there's not enough information available to definitively identify "enforcer" from overclockers.at. Here's what we know:
"enforcer" is a username on overclockers.at, an Austrian tech forum.
They have made a single post in several threads ([thread 1], [thread 2], [thread 3], [thread 4]).
Without additional details like the content of their posts, date of activity, or any other user information, it's impossible to determine who "enforcer" is.
Here are some possibilities:
It could be a regular user who just hasn't posted much.
It could be someone who created an account specifically to participate in those specific threads.
There's a chance it could be a moderator or administrator using a specific username, but that's less likely without any moderation actions associated with the posts.
If you're interested in learning more about "enforcer", you could try searching for their username within overclockers.at to see if they've posted anywhere else. You could also try searching for the content of their posts (if you remember it) to see if it appears elsewhere on the forum.
All hail to our glorious Overlord enforcer 
I'd really like to understand how it happens that the various AI models, when they're "stumped", just confabulate a pretty little world for themselves. Is the guideline simply "deliver something, anything is better than 'I don't know'", or where does this behavior come from?
A transformer simply computes a pseudo-probability (softmax) for the next word given the input sequence, and then just keeps continuing like that.
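To make that loop concrete, a toy sketch in Python (made-up vocabulary and random logits instead of a real transformer, purely to show the softmax → sample → append cycle):
Code:
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "GPU", "is", "fast", "slow", "maybe", "."]

def fake_logits(context):
    # stand-in for the transformer forward pass; a real model scores every
    # vocab entry based on the whole context, here the scores are just random
    return rng.normal(size=len(vocab))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

context = ["the", "GPU"]
for _ in range(5):
    probs = softmax(fake_logits(context))   # pseudo-probability for each candidate next word
    next_word = rng.choice(vocab, p=probs)  # sample one of them
    context.append(next_word)               # feed it back in and repeat
print(" ".join(context))
That distribution has no "I don't know" entry; something always gets sampled, which is where the confident-sounding nonsense comes from.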
Quote from a post by Jedimaster: I'd really like to understand how it happens that the various AI models, when they're "stumped", just confabulate a pretty little world for themselves. Is the guideline simply "deliver something, anything is better than 'I don't know'", or where does this behavior come from?
Language models don't know any absolute truth. Simplified, the network produces *the most probable answer* (a sort of median answer) to a prompt, which is also why the results vary so wildly from query to query. The models themselves don't know when they're wrong. You can possibly enforce that with a knowledge graph, but then you're forcing a certain behavior to some degree.
For the same reason, fantasies about AI training other AI are moot: the models would get dumber and dumber over time through that kind of inbreeding.
Quote from a post by Dune: For the same reason, fantasies about AI training other AI are moot: the models would get dumber and dumber over time through that kind of inbreeding.