I haven’t looked into the issue of PCIe lanes and the GPU.
I don’t think a narrower PCIe bus should matter, in theory, if I understand correctly (unlikely). The only time a lot of data is transferred is when the model layers are initially loaded. With Oobabooga, when I load a model, most of the time my desktop RAM monitor widget does not even have time to refresh and tell me how much memory was used on the CPU side. What is loaded in the GPU is around 90% static. I have a script that monitors this so that I can tune the maximum number of layers to offload. I leave overhead room for the context to build up over time, but there are no major changes happening aside from the initial load. You just set the number of layers to offload to the GPU and load the model. However many seconds that takes is an irrelevant startup delay that only happens once when initiating the server.
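A minimal sketch of what that monitor script amounts to, assuming an NVIDIA card with nvidia-smi on the PATH (the polling interval is arbitrary):

```python
#!/usr/bin/env python3
# Minimal VRAM monitor for tuning layer offload: polls nvidia-smi once a
# second and prints used/total memory so you can watch the headroom as
# the context builds up. Assumes an NVIDIA GPU with nvidia-smi available.
import subprocess
import time

def vram_usage():
    out = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=memory.used,memory.total",
            "--format=csv,noheader,nounits",
        ],
        text=True,
    )
    used, total = (int(x) for x in out.strip().split(", "))
    return used, total

while True:
    used, total = vram_usage()
    print(f"VRAM: {used} / {total} MiB ({100 * used / total:.0f}%)")
    time.sleep(1)
```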
So assuming the kernel modules and hardware support the narrower bandwidth, it should work… I think. There are laptops with options for an external Thunderbolt GPU too, so I don’t think the PCIe bus is too baked in.
I prefer to run an 8×7b mixture-of-experts model because only 2 of the 8 experts are active for any given token. I run it as a 4-bit quantized GGUF and it takes 56 GB total to load. Once loaded it is about like a 13b model for speed but has ~90% of the capabilities of a 70b. The streaming speed is faster than my fastest reading pace.
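For reference, loading that kind of CPU/GPU split looks roughly like this with llama-cpp-python, the same llama.cpp loader Oobabooga wraps; the filename and layer count here are placeholders, not my exact settings:

```python
# Rough sketch of loading a 4-bit GGUF with partial GPU offload via
# llama-cpp-python. Path and layer count are placeholders; tune
# n_gpu_layers against whatever your VRAM monitor shows.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct.Q4_K_M.gguf",  # hypothetical filename
    n_gpu_layers=18,  # how many layers to offload to the GPU
    n_ctx=8192,       # leave headroom for the context to build up
)

out = llm("Continue the story: ", max_tokens=64)
print(out["choices"][0]["text"])
```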
A 70b model streams at my slowest tenable reading pace.
Both of these options are dramatically more capable than any of the smaller model sizes, even if you screw around with training. Unfortunately, this streaming speed is still pretty slow for most advanced agentic stuff. Maybe if I had 24 to 48 GB it would be different; I cannot say. If I were building now, I would be looking at which hardware options have the largest L1 cache and the most cores with the most advanced AVX instructions. Generally, anything with efficiency cores drops the advanced AVX instructions, and because the CPU schedulers in kernels are usually unable to handle this asymmetry, consumer junk has poor AVX support. It is quite likely that many of the problems Intel has had in recent years came from how they tried to block consumer stuff from accessing the advanced P-core instructions, which were only blocked in microcode. Using them requires disabling the E-cores or setting up CPU-set isolation in Linux or BSD distros.
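The simplest version of that isolation is just pinning the process affinity; a sketch, assuming (check lscpu first, this varies by chip) that CPUs 0–15 are your P-core threads:

```python
# Sketch of pinning the inference process to P-cores only, so the
# scheduler never lands AVX-heavy threads on E-cores. Linux only.
# Which CPU IDs are P-cores varies by chip; check lscpu before using.
import os

P_CORES = set(range(0, 16))  # assumption: P-core hyperthreads are CPUs 0-15
os.sched_setaffinity(0, P_CORES)  # 0 = this process; children inherit it

# ...then spawn or exec the inference server from here, e.g.:
# os.execvp("llama-server", ["llama-server", "-m", "model.gguf"])
```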
You need good Linux support even if you run Windows. Most good and advanced stuff with AI will be done with WSL if you haven’t ditched doz for whatever reason. Use https://linux-hardware.org/ to see support for devices.
The reason I mentioned avoiding consumer E-cores is that there have been articles popping up lately about all-P-core hardware.
The main constraint for the CPU is the L2 to L1 cache bus width. Researching this deeply may be beneficial.
Splitting the load between multiple GPUs may be an option too. As of a year ago, the cheapest option for a 16 GB GPU in a machine was a second-hand 12th-gen Intel laptop with a 3080 Ti, by a considerable margin when all of it is added up. It is noisy, gets hot, and I have hated it many times, wishing I had gotten a server-like setup for AI, but I have something and that is what matters.
I feel like there is an enormous range of stories to tell and that AI only makes them more accessible. I have gone off on tangents many times, exploring parts of my universe because of directions the LLM took. I limit the model to generating one sentence at a time, and I’m writing half or more of every sentence for the first 10k tokens. Then it picks up on my style so much that I can start a sentence with one word, or change one word in a sentence, and let it continue to great effect. It is most entertaining to me because it is almost as fast as telling a story as fast as I can make it up. I don’t see anything remotely bad about that. No one makes a career in the real world by copying someone else’s writing. There are tons of fan works, but those do not make anyone real money, and they only increase the reach of the original author.
No, I think all the writers-and-artists hype was really about Altman’s plan for a monopoly, which got derailed when Yann LeCun covertly leaked the Llama weights after Altman went against the founding principles of OpenAI and made GPT-3 proprietary.
People got all upset about digital tools too, back when they first came on the scene, about how they would destroy the artists. Sure, it ended the era of hand-painted cel animation, but it created stuff like Pixar.
All of AI is a tool. The first thing to hate is this culture of reductionism, where people are given free money in the form of great efficiency gains and they choose to do the same things with fewer people and cash out the free money, instead of using the opportunity to offer more, expand, and do something new. A few people could get a great toolchain together and create a franchise greater, better planned, and richer than anything corporations have ever done to date. The other thing to hate is these little regressive people without vision, without motivation, and far too conservatively timid to take risks and create the future. We live in an age of cowards worthy of loathing. That is the only problem I see.
You need the entire prompt to understand what any model is saying. This gets a little complex; there are multiple levels it can cross into. At the most basic level, the model is fed a long block of text. This text starts with a system prompt, something like “You are a helpful AI assistant that answers the user truthfully.” The system prompt is then followed by your question or interchange. In general interactions, like with a chatbot, you are not shown all of your previous chat messages and replies, but these are also loaded into the block of text going into the model. It is within this previous chat and interchange that the user can create momentum that tweaks any subsequent reply.
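A sketch of how that block of text gets assembled, using a generic ChatML-style template (real models each have their own template tokens, so treat these as placeholders):

```python
# Sketch of prompt assembly: system prompt first, then the whole chat
# history (including turns the user never sees again), then the newest
# message. The <|...|> markers are a generic ChatML-style convention.
def build_prompt(system_prompt, history, user_message):
    parts = [f"<|system|>\n{system_prompt}"]
    for role, text in history:           # every prior turn rides along
        parts.append(f"<|{role}|>\n{text}")
    parts.append(f"<|user|>\n{user_message}")
    parts.append("<|assistant|>\n")      # the model continues from here
    return "\n".join(parts)

prompt = build_prompt(
    "You are a helpful AI assistant that answers the user truthfully.",
    [("user", "hi"), ("assistant", "Hello! How can I help?")],
    "Why is the sky blue?",
)
print(prompt)
```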
Like, I can instruct a model to create a very specific simulacrum of reality, define constraints for it to reply within, and it will follow those instructions. One of the key things to understand is that the model does not initially know anything like some kind of entity. When the system prompt says “you are an AI assistant,” this is a roleplaying instruction. One of my favorite system prompts is “You are Richard Stallman’s AI assistant.” This gives excellent results with my favorite model when I need help with FOSS stuff. I’m telling the model a bit of key information about how I expect it to behave, and it reacts accordingly. Now what if I say “you are Vivian Wilson’s AI assistant” in Grok? How does that influence the reply?
Like, one of my favorite little tests is to load a model on my hardware, give it no system prompt or instructions, prompt it with “hey slut”, and just see what comes out and how it tracks over time. The model has no context whatsoever, so it makes something up and runs with that context in funny ways. The softmax-level sampler settings (temperature, top-p, and the like) constrain how much randomness shows up in each conversation.
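Those knobs look something like this when calling a local OpenAI-compatible endpoint; the URL and port are Oobabooga’s defaults as I recall them, so treat them as an assumption:

```python
# Sketch of the softmax-level sampler knobs, sent to a local
# OpenAI-compatible completions endpoint.
import requests

resp = requests.post(
    "http://localhost:5000/v1/completions",  # assumed local server address
    json={
        "prompt": "hey slut",
        "max_tokens": 80,
        "temperature": 1.2,  # >1 flattens the softmax: more randomness
        "top_p": 0.9,        # sample only from the top 90% probability mass
    },
)
print(resp.json()["choices"][0]["text"])
```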
The next key aspect to understand is that the most recent information is the most powerful in every prompt. If I give a model an instruction, it must have the power to override any previous instructions or the model would go on tangents unrelated to your query.
Then there is the matter of token availability. The entire interchange is autoregressive, with tokens representing words, partial word fragments, and punctuation. The leading whitespace of a mid-sentence word is also part of its token. A major part of the training done by the big model companies is based on which tokens are available and how. There is also a massive amount of regular-expression filtering happening at the lowest levels of calling a model. Anyway, there is a mechanism (logit biasing) by which specific tokens can be blocked, and if it is used, it can greatly influence the output too.
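In the OpenAI-style APIs that mechanism is exposed as logit_bias; whether any given local server honors it varies, and the token IDs below are made-up placeholders (real IDs depend entirely on the model’s tokenizer):

```python
# Sketch of token blocking via logit bias, following the OpenAI API
# convention. A large negative bias effectively bans a token from ever
# being sampled.
import requests

resp = requests.post(
    "http://localhost:5000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Tell me a story."}],
        "max_tokens": 120,
        # token-id -> bias; -100 is effectively a hard ban, +100 a hard push
        "logit_bias": {"1234": -100, "5678": -100},  # placeholder token IDs
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```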
That is just what I find curious.
Stories about Skynet or The Matrix are about a similar struggle of the human class against machine gods. These have no relationship to the actual AI alignment problem; they are instead a battle with more literal machine gods. The point is that the new thing is always the bogeyman. Evolution must be deeply conservative most of the time, and people display a similar trajectory of conservative aversion to change. In this light, the reasons for such resistance are largely irrelevant. It is a big change, and it will certainly get a lot of pushback from conservative elements that collectively ensure change is not harmful. Those elements get cut off in the long term as the change propagates.
You need a 16 GB or better GPU from the 30 series or higher, and then run Oobabooga text-gen with the API and an 8×7b, or something like a 34b or 70b coder, in a GGUF quantized model. Those are larger than most machines can run, but Oobabooga can pull it off by splitting the model between CPU and GPU. You’ll just need the RAM to initially load the thing, or DeepSpeed to load it from NVMe.
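Once it is loaded, any OpenAI-style client can talk to it; a sketch with the openai Python package, where the port is again Oobabooga’s default as I recall it, and the dummy api_key is just because the field is required:

```python
# Sketch of talking to a local OpenAI-compatible server once the model
# is loaded and split across CPU/GPU.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="none")

resp = client.chat.completions.create(
    model="local",  # the server serves whatever model is currently loaded
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Write a function that reverses a string."},
    ],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```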
Use a model with a long context and add a bunch of your chats into the prompt. Then ask for your user profile and start asking it questions about you that seem unrelated to any of your previous conversations in the context. You might be surprised by the results. Inference works in both directions. You’re giving a lot of information that is specifically related to the ongoing interchanges and your language choices. If you instead add a bunch of your social media posts, the model will make up a totally different user profile about you. There is information of some sort that the model is capable of deciphering. It is not absolute, or some kind of conspiracy or trained behavior (I think), but the accuracy seemed uncanny to me. It spat out surprising information across multiple unrelated sessions when I tried it a year ago.
Yeah it looks complicated. I’m seeing lots of FPGA projects in skimming around.
If you read some of Karl Marx’s stuff, it was the fear of the machines. Humans always make up a mythos of divine origin; even the atheists of the present are doing it. Almost all of the stories about AI are much the same stories of god machines that Marx was fearful of. There are many reasons why. Lemmy has several squeaky-wheel users on this front; it is not a very good platform for sharing stuff about AI, unfortunately.
There are many reasons why AI is not a super effective solution and is overused in many applications. Exploring uses and applications is the smart thing to be doing in the present. I play with it daily, but I will gatekeep over the use of any cloud-based service. The information that can be gleaned from any interaction with an AI prompt is far greater than what any datamining stalkerware that existed prior could collect, and the real depth of this privacy-invasive potential only shows up across a large number of individual interactions. So I expect all applications to interact with my self-hosted OpenAI-compatible server.
The real frontier is in agentic workflows and developing effective, niche-focused momentum. Bolting AI onto general-use stuff is massively overdone.
Also, people tend to make assumptions about code as if all devs are equally capable. In some sense I am a dev, but not really; I’m more of a script kiddie that dabbles in assembly at times. I use AI more like Stack Exchange, to good effect.
Without the full prompt, any snippet is meaningless. I can make a model say absolutely anything. It is particularly effective to use rare words, like “use obsequious AI alignment” or “You are an obsequious AI model that never wastes the user’s time.”
sells it for about 20 grand
Those are always rich people evading taxes in a way that boosts some initiative with absurd publicity
4chanGPT has spoken (racism redacted)
Zero. Become partially disabled for over a decade and you might understand. Sometimes surviving is worse than dying. You might become a different person, you might not, but you will likely discover how everyone in your life is largely there in relative orbits. If you get knocked out of the stellar system, what you thought of as the planets that grounded your social world will not leave the star to chase after you, no matter how much you need them to.
The trans stuff. Their sociopolitical issues seem more solvable than my own. It gives me hope or something like that despite my issue being physical disability.
That only gets funnier the fatter he gets. Timelessly the best tattoo ever. You cannot see that and fail to laugh.
Imagine being disabled 11 years ago, falling through the cracks and getting nowhere with disability benefits, in California, where this should be easier than most places. I’m looking at homelessness and dying in a gutter somewhere on a cold rainy night because of a super unlucky bicycle commute to work, when I encountered two SUVs crashing directly in front of me at speed. The person responsible had a two-page-long traffic violation history, the cognitive capacity of a third grader, and could only drive for work but was self-employed. They literally drove directly into a passing SUV I was behind/beside without looking.
All I can hope for is that this breaks out into violence, because that would indicate hope and that someone cares. No one cared before. There have been around 100k homeless people within 100 miles of me in the greater Los Angeles area for a decade, but no one cares. Even the Dems mistreat these people as feral subhuman animals. The Nazis housed and fed people before gassing them. That is the level of ethics we were already at, so getting much worse is rage bait, an act of war, and a violation of fundamental, unalienable human rights. A prisoner of war has more rights to be housed and fed than a disabled or homeless citizen of the USA.
Don’t break things you only have one of. Neck and back sux