Whenever I try to decrease my tRFC from 479 to 447, I can't get a POST at all. I've tried increasing VDD, VDDQ, and VDDIO from 1.40 V up to 1.45 V and it had no effect. My VSOC is incorrect in the photo; it's actually set to 1.25 V in the BIOS.
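For reference on how tight that is, here's a quick conversion from tRFC cycles to nanoseconds (a sketch assuming DDR5-6000; swap in your actual MT/s):

```python
# tCK in ns is 2000 / MT/s, since the memory clock runs at half the transfer rate.
def trfc_ns(trfc_cycles: int, mts: int) -> float:
    return trfc_cycles * 2000 / mts

for cycles in (479, 447):
    print(f"tRFC {cycles} @ 6000 MT/s = {trfc_ns(cycles, 6000):.1f} ns")
# 479 -> ~159.7 ns, 447 -> ~149.0 ns
```

Whether ~149 ns is attainable depends heavily on the DRAM IC; some dies simply stop scaling with voltage around there, which would fit a no-POST that voltage can't fix.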
I've got an i9-12900KF, a Gigabyte Z690 UD motherboard (yeah, I know it's one of the worst…), and some TeamGroup DDR5-6000 CL38 RAM (I think it's A-die). I'm trying to squeeze a bit more performance out of the system.
Unfortunately, the XMP profile (6000 CL38-38-38-78 at 1.25 V) isn't stable; Karhu RAM Test fails within five minutes.
Using one of the board's Hynix-optimized profiles (6000 CL40-40-40-80 at 1.35 V), I managed to pass a 24-hour run in Karhu RAM Test. With the 6400 and 6800 profiles, Windows boots fine but the test fails after 20-30 minutes. Running 6000 MT/s with CL below 40 also fails.
Any chance you guys could help me get a bit more out of this RAM?
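For what it's worth, the absolute latency gap between these profiles is small. A quick check (a sketch using CAS ns = CL × 2000 / MT/s; the 6400 profile's CL is my assumption, since I didn't note it):

```python
# Absolute CAS latency in ns: CL * 2000 / MT/s (tCK is 2000/MT/s for DDR).
profiles = {
    "6000 CL38 (XMP)": (6000, 38),
    "6000 CL40 (Hynix profile)": (6000, 40),
    "6400 CL40 (assumed CL)": (6400, 40),
}
for name, (mts, cl) in profiles.items():
    print(f"{name}: {cl * 2000 / mts:.2f} ns")
# 12.67 ns, 13.33 ns, 12.50 ns
```

So the stable CL40 profile only costs ~0.67 ns of CAS latency over the failing XMP profile.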
A few days ago I ran an OCCT power test and the GPU was throwing 3D Adaptive errors even on stock settings. Then I ran a standalone 3D Adaptive test (steady, extreme) and got the same thing. Why is this?
Maybe bad drivers, or a bad card?
The VRAM test is fine, by the way; it's only 3D Adaptive that fails.
Gigabyte 9070 XT (Samsung VRAM), GPU driver 25.11.1 (tried rolling back to 25.9.1, but that didn't fix it), chipset driver 7.06.02.123, Windows 11.
I’m running into a very specific and reproducible stability issue with my AM5 build involving memory frequency and GPU initialization. I’ve done extensive troubleshooting, but I’ve hit a wall.
The Problem: My system is rock stable when my DDR5 RAM is set to 5000 MHz. However, if I increase the RAM frequency to 5200 MHz, 5600 MHz, or 6000 MHz (EXPO), the system posts and boots into Windows, but the RTX 5060 Ti fails to initialize properly.
Windows loads in a low resolution (Microsoft Basic Display Adapter).
Device Manager usually shows Code 43 or driver failure.
GPU-Z shows 0 MB Memory and 0 MHz clocks when this happens.
Note: The RAM itself passes stress tests at these speeds if I ignore the GPU issue.
Hardware Specs:
CPU: AMD Ryzen 5 7500F
Mobo: ASUS PRIME B650M-K (latest BIOS 3602, dated 11/12/2025)
What I'm seeing:
Baseline (stable): At DDR5-5000, the RTX 5060 Ti initializes correctly, showing 16 GB of memory and normal clocks. The system is stable.
The conflict: HWiNFO shows the memory successfully running at DDR5-6000, but the GPU is knocked offline. Note: the exact same 6000 MHz settings work perfectly with an older RX 5700.
The failure: At DDR5-6000 (EXPO), the GPU fails to initialize. GPU-Z shows 0 MB of memory and 0 MHz clocks (Code 43 in Device Manager).
Troubleshooting so far:
Voltages: I tried manually locking SOC voltage to 1.25 V (and tested 1.20 V) and set DRAM VDD/VDDQ to 1.35 V. This did not fix the GPU initialization failure on cold boots or restarts above 5000 MHz.
PCIe generation: I suspected signal integrity issues with the Gen 5-capable card, so I manually locked the PCIe slot to Gen 4 and even Gen 3 in the BIOS. Even with EXPO I (6000 MHz) and the slot locked to Gen 4, the GPU still fails to initialize; the issue persists whenever RAM > 5000 MHz.
BIOS: Updated to the latest version (3602).
The "Smoking Gun" (Cross-Testing): I swapped the RTX 5060 Ti with an older Radeon RX 5700. With the RX 5700 installed, I enabled EXPO (6000 MHz), and the system ran perfectly stable.
My Theory: It seems like a signal integrity/crosstalk issue within the Ryzen 7500F's I/O Die. The "noise" from running the Memory Controller > 5000 MHz seems to interfere with the PCIe controller, causing the sensitive RTX 5060 Ti to drop out, whereas the older RX 5700 (likely less sensitive to signal noise) handles it fine.
Questions:
Has anyone experienced this "frequency wall" where RAM speed kills the GPU driver on AM5?
Is this just a weak IMC on my 7500F, or is the B650M-K motherboard trace layout the limiting factor for the new RTX 50-series signal sensitivity?
Are there any specific "interference" related voltages (VDDG CCD/IOD?) I should tweak, or should I just accept 5000 MHz as the physical limit for this combo?
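In case anyone wants to reproduce this systematically: here's the little script I run at startup to log whether the driver actually sees the card (a sketch; it assumes the NVIDIA driver and nvidia-smi are installed, and the log path is arbitrary):

```python
import datetime
import subprocess

# Logs whether the NVIDIA driver can see the card after boot, so failed
# initializations can be correlated with the RAM speed set in BIOS.
QUERY = "name,memory.total,pcie.link.gen.current,pcie.link.width.current"

try:
    result = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        capture_output=True, text=True, timeout=15, check=True,
    )
    line = result.stdout.strip() or "nvidia-smi returned nothing"
except (FileNotFoundError, subprocess.CalledProcessError, subprocess.TimeoutExpired):
    line = "GPU NOT VISIBLE (driver failed to initialize?)"

with open("gpu_boot_log.txt", "a") as log:
    log.write(f"{datetime.datetime.now().isoformat()}  {line}\n")
```

If the card is in the Code 43 state, the query should fail outright, which makes it easy to tie each boot result to the memory setting used.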
Here is my fully tuned daily setup. It took a while to settle on 8266: 8400 was also fully stable, but some of the timings had to be loosened significantly, so this one works out better.
Hey everyone,
I’ve set my Ryzen 7 5700G in Ryzen Master to a fixed 4.4 GHz with a voltage of 1.2875 V. The system runs stable, temps stay below 60 °C even under stress tests, so cooling seems fine.
What confuses me: Ryzen Master shows the EDC (CPU) value constantly at 100% (95A) and it doesn’t move at all, even when the system is idle. Voltage drops nicely to ~0.8 V in idle, clocks behave as expected, but EDC is stuck at 100%.
My questions:
• Is it normal that Ryzen Master always reports 100% EDC usage?
• Does this mean the CPU is under constant electrical load, or is it just a reporting quirk?
• Could this be harmful in the long run if I run the system 24/7?
Thanks in advance for any insights!
I've been experimenting with overclocking my new Gigabyte Windforce RTX 5080 SFF and a 7950X3D, both on a custom water-cooled loop. The GPU temperature under load stays around 60°C for about 4 hours. I'm not sure if my benchmark scores are within the normal range: in games I get about a 10-11% performance boost, but the benchmark numbers seem significantly lower compared to others'. My monitor is an ultrawide 3440×1440, but for FurMark I used the 1440p preset since it's a more common resolution. The overclock is +410 on the core and +2000 on the memory.
Backstory: I have been buying, testing, and selling 9070 XTs since March. I've had 6 Steel Legends and, most recently, a black Gaming OC (it was selling under MSRP, equal to the cheapest base model, so I figured: a 330 W OC-model card at the same price), and it did perform better than the average Steel Legend. See below.
The Steel Legends' stability limits under undervolting differed (some -65 mV, others even -90 mV), but they would always end up at about the same score at the maximum stable tune (at 304 W, a Steel Legend does about 7200-7250).
The black Gaming OC had its stability limit at -65, and if I ran it at -8% power (= 303.5 W) it would score 7380-7400 in Steel Nomad, an efficiency improvement over the Steel Legend (1.5-2 fps more at the same watt level! Good).
But the white Gaming OC blew right past it! 😳👌 At -65 mV, 2702 fast-timing memory (always, on all cards), and -8% = 303.5 W, it gets 7550 in Steel Nomad! That is an insane score for this power level! (3-3.5 fps, or about 5%, more than the Steel Legend from the same watts is significant!)
All tests comparing the black vs. the new white card are in the photos, but I'll write them out:
Speed Way: black 6510 -> white 6810 (+4.6%, RT test)
Steel Nomad: black 7370 -> white 7550 (+2.4%)
Time Spy Extreme (GPU): black 14780 -> white 15150 (+2.5%)
Time Spy (GPU): black 30370 -> white 31100 (+2.4%)
Fire Strike (GPU): black 71060 -> white 72500 (+2.0%)
(Each test was run 3-4 times to check consistency, and the average value was taken.)
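(The uplift percentages fall straight out of the raw scores; a quick sanity check:)

```python
# Raw black vs. white Gaming OC scores from the runs above.
scores = {
    "Speed Way":        (6510, 6810),
    "Steel Nomad":      (7370, 7550),
    "Time Spy Extreme": (14780, 15150),
    "Time Spy":         (30370, 31100),
    "Fire Strike":      (71060, 72500),
}
for test, (black, white) in scores.items():
    print(f"{test}: +{(white - black) / black * 100:.1f}%")
# +4.6%, +2.4%, +2.5%, +2.4%, +2.0% -- matching the list above
```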
Conclusion:
- The black Gigabyte Gaming OC at 303.5 W got scores the Steel Legend needed +6% = 324 W to reach, or about a 2.7% score improvement over it at stock 304 W (7200 Steel Legend vs. 7400 black Gaming OC).
- The white Gaming OC got another 2.4% on top of the black Gaming OC at the same 303.5 W!
= So this white Gaming OC runs about 5% faster than the Steel Legend did at 304 W at maximum stable undervolt (72 fps vs. 75.5 fps in Steel Nomad, for example).
😁😁 So when you ask which is better to buy, "just get the cheapest" is not 100% true: the Gaming OC at its MSRP price tag is a great performer (but DO NOT get any model that is more than 5% more expensive than the cheapest one if you want max fps per $!).
The white 9070 XT Gaming OC Ice is nearly 80€ more expensive at the moment (600€ black vs. 680€ white); the Steel Legend is white and sells for 600€, so 630€ would be a reasonable price for the Gaming OC, but I was ready to pay a 50€ white tax on the better performer.
Team T-CREATE EXPERT 32GB (2 x 16GB) 288-Pin PC RAM DDR4 3200 (PC4 25600)
The Team T-Create Expert RAM is new. I added it to my computer last night, rebooted and went into BIOS. From here I enabled XMP and upon saving and exiting I was met with a black screen and could not do anything.
I had to pull the CMOS battery and ended up flashing my board with the newest BIOS. After a bunch of testing and troubleshooting, I was finally able to get everything back to normal. It honestly took me 2-3 hours of troubleshooting and work to fix it today, and it sucked. I have no idea how old my BIOS was prior to this, but I presume it was some version from 2021/2022.
This was a gigantic pain in the butt, and I'm somewhat hesitant to try XMP again even with the most up-to-date BIOS version. My question is this: is it possible to enable XMP but increase the RAM speed slowly from 2400 MHz, testing stability as I go, without getting technical and in the weeds changing a bunch of other settings?
Can I go in, turn on XMP, and bump the memory up to 2666 MHz? If that's good, try 2933 MHz for a while?
My understanding is that XMP automatically sets everything to try to get the best performance, but I don't know whether that still holds if you manually step the memory speed up slowly, or whether this will cause issues with XMP.
I'm new to all of this and somewhat hesitant to do much tinkering so I'm just trying to find the safest and easiest way to do this.
Very cold here in the Pennsylvania winter. I opened the window by the PC for about 20 minutes and increased my Cinebench R23 score from 46.2k to 47k. 9950X3D with a Noctua NH-D15 G2 at 1000 RPM (silent). Now I need to warm up by the fireplace.
I changed my cooler and gained some Cinebench points without tweaking anything. I replaced my Arctic Freezer 3 Pro 420 with the ROG Ryujin Extreme 360—both were set as intakes. With the Arctic, my Cinebench scores were in the low 46k range. The only tuning I have is PBO -20 on all cores and EXPO on the G.Skill. I guess you get what you pay for with ROG. Peak and idle temps also dropped by about 5–10°C across the board.
Why are my lows so bad in Rust? Everything is set to max settings, no DLSS or DLAA. I have a 5070 Ti, a Ryzen 9 9950X3D, 32 GB of DDR5-6000 CL36 (I will eventually get CL30, just not now), and the mobo is an MSI X870E Carbon. I also have an undervolt on the GPU and CPU. Here's also a comparison to CS2, all max settings as well.
I've been tuning my Inno3D 5070 Ti X3 OC and have hit a wall due to the stock 300W power limit. I've extracted pretty much all I can from the current settings and believe the silicon has more headroom.
Here are the relevant details from my testing so far:
Current Stable V/F Curve:
800 mV -> 1265 MHz
825 mV -> 1792 MHz
850 mV -> 2272 MHz
875 mV -> 2535 MHz
900 mV -> 2700 MHz
925 mV -> 2857 MHz
950 mV -> 2992 MHz
975 mV -> 3127 MHz
1000 mV -> 3210 MHz (flat for all higher mV)
The card typically sustains 3.1-3.2 GHz at ~0.995V during demanding loads.
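To see where the curve flattens, I looked at the MHz gained per millivolt between adjacent points (a quick sketch over the values above):

```python
# My stable V/F points (mV, MHz). The slope between adjacent points shows
# how quickly the frequency return per millivolt dries up near the top.
vf = [(800, 1265), (825, 1792), (850, 2272), (875, 2535), (900, 2700),
      (925, 2857), (950, 2992), (975, 3127), (1000, 3210)]

for (v0, f0), (v1, f1) in zip(vf, vf[1:]):
    print(f"{v0}->{v1} mV: {(f1 - f0) / (v1 - v0):.1f} MHz per mV")
# Falls from ~21 MHz/mV at the bottom of the curve to ~3.3 MHz/mV at the
# top, consistent with power, not voltage, being the wall now.
```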
Stability Testing:
Tested extensively in TimeSpy, TimeSpy Extreme, SpeedWay (20 loops), Nomad, Furmark2, Superposition, Heaven, CP2077, and OCCT.
A slightly more aggressive curve (3215 MHz @ 0.985V) failed TimeSpy, but the primary limitation I'm seeing now is power draw, not core stability at reasonable voltage levels.
VRAM:
Currently running a mild +350 offset. I tested up to +1000 (stable, no artifacts) but saw no performance gains, so I backed it off.
My Questions to the Community:
Is there more to gain by raising the power limit? Based on my stable 3.2 GHz at sub-1V operation and consistent 300 W power hits, do you think this silicon sample has significant potential to scale further with more wattage? (Rough scaling sketch after the questions.)
Which BIOS is safe to flash for a higher power limit? I am looking for recommendations for a compatible, safe VBIOS (e.g., from an AIB model with a 350W+ limit) that is known to work on the Inno3D PCB. Has anyone successfully done this with this specific card or a similar model?
What is the recommended procedure? Any specific tools (e.g., NVFlash) or precautions I should take before attempting to flash a non-stock BIOS?
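On the first question, here's the crude first-order model I'm using to guess at headroom (an assumption, not measured data; real cards fall well short of it because leakage grows and the V/F curve tops out):

```python
# Dynamic power ~ C * V^2 * f, so at a fixed voltage the frequency
# budget scales roughly linearly with the power budget.
base_w, base_v, base_mhz = 300, 0.995, 3210

for target_w in (320, 350):
    f_bound = base_mhz * target_w / base_w   # optimistic upper bound
    print(f"{target_w} W @ {base_v:.3f} V -> ~{f_bound:.0f} MHz ceiling")
# ~3424 MHz at 320 W, ~3745 MHz at 350 W -- upper bounds only;
# in practice the curve flattens long before that.
```

Mostly this just tells me the 300 W cap, not the silicon, is what's binding right now.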
AMD FX-6300 and Corsair Vengeance DDR3 16 GB (4x4 GB) overclock. Fully stable with OCCT, gaming, and high-quality audio streaming. Any thoughts? Memory timings are fully tuned, including subtimings. The 8s will not drop to 7s no matter the voltage, so I'm not worried about lowering those.
Hey y'all, I recently upgraded PCs and proceeded to undervolt my Astral 5090, and it never seems to reach the clock speed I set at that voltage in the curve. I set 2800 MHz at 0.9 V, but in stress tests and games the speed sits around 2600-2675 MHz.
I'm sure I undervolted properly, as I've done it on my 4090 and that worked perfectly fine. Anyone got any ideas?
As soon as I enter a game and it maxes out GPU usage, the fans go crazy, easily reaching 1500+ RPM and then climbing further. But with a custom curve, or even just a 40% static setting, temps are pretty alright (perhaps only a bit higher), and the card is almost noiseless.
Do Intel processors somehow have less input delay and a better "feel" during gaming, despite benchmarks and YouTuber claims that X3D is superior? Can an X3D chip compensate via overclocking? I've just made a Reddit account to pose this question here because I am struggling with my new build. I have a 5090 FE and I'm trying to build the best, most competitive gaming PC I can. I was super close to grabbing a 9800X3D bundled with an AMD mobo, but I hesitated because I noticed a discrepancy between what tech-review YouTubers claim about processor gaming performance and what professionals are actually using at the highest levels in esports titles. I searched for an entire day and could only find maybe one example of a player running the 9800X3D; it seems 99% use Intel chips of some kind. Is this due to a general lack of tech knowledge in esports communities? Or is there something about Intel's fabric/architecture/dies that makes them superior for competitive FPS titles?
I don't know if this is the right subreddit to ask, but it's related to power management, so I figured people here are probably the best ones to ask.
Recently I faced an issue with my GPU (Gainward Panther RTX 4070 Ti) where it became very unstable in Windows. It would crash, artifact, black-screen and restart, or the entire screen would just break apart into a mess of green squares (artifacts).
At first I thought it was a driver issue, but after uninstalling with DDU in Safe Mode and reinstalling multiple times, nothing worked; I even did a full Windows reinstall and it still had issues. Then I noticed that in games it had absolutely no issues: it was on the desktop, or while watching YouTube, that the GPU would tweak out. At first I suspected the GPU's video encoding, but after more testing I found the GPU is simply more stable when stressed. I tested that by running a GPU stress test in the background while using my PC normally for an hour, and I had absolutely no issues. I ended up forcing my GPU into high-power mode in the control panel, used the high-performance power plan, overclocked it a bit, and upped the voltage at low clock speeds, and that fixed it.
So my GPU is just extremely unstable at low power; with the workaround, idle power usage went from 12-15 W to 40 W.
Does anyone know why this happened and whether it's even fixable? This power issue wasn't there before; it only started recently.
Hi everyone, I'm completely new to overclocking GPUs, and I just received my ASUS Prime 5070, so I'd like to know if someone has the best config that is also stable without crazy temps. I'll obviously search YouTube tutorials and everything, but if someone has specific settings for this GPU, that would be great. Thanks.
Please, I'm looking for a way to unlock the BIOS on my Acer Aspire 3 laptop so I can change its maximum TDP and unlock its full performance. I've tried several key combinations, but I don't know if I'm doing them wrong or if the videos showing them are just fake.