Thread: Equipment Crew Tech Thread
-
12-15-2018, 05:38 AM #601
-
12-15-2018, 10:42 AM #602
Yes, it has a fan.
I can hear it when I put my ear next to it. It's way, way quieter than anything else with a fan or mechanical drive in my office. If you were to place it in a very quiet room, I would guess that you might be able to hear it when the fan is running. My office has too much other fan noise for me to be able to tell though.
▪█─────█▪ Equipment Crew #35
-!!!---!!!- No Excuses Homemade Equipment Crew #14
-
12-15-2018, 11:18 AM #603
It's not the office software that would be the issue; it's the OS on prepackaged laptops.
For this purpose I would get one with at least 8 GB of RAM. If the OS is Windows 10, then 12 GB or 16 GB would be better.
As for models, I'm in the market for a Windows one as well. Swimmer, is there a budget for the laptop?
[M]===[6]▪ Mech6 Crew #35 ▪[M]===[6]
[]------[] York Barbell Club #80 (DD)[]-----[]
-
12-15-2018, 12:46 PM #604
FWIW, I purchased an HP laptop for my son last Christmas. The key specs are:
- 17.3 inch 1920x1080 IPS screen (IPS displays are better for color rendition, but they cost more)
- Intel Core i7-7500U processor: 2.7 GHz base frequency, 3.5 GHz turbo, 2 cores / 4 threads, 4 MB cache (SmartCache, whatever that is)
- 16GB RAM
- 1 TB hard drive - I think it's a mechanical drive, not an SSD
- Windows 10 Home 64-bit
Here's a link: https://www.amazon.com/gp/product/B07551S9CJ/
Given that it's a 2017 model, I'm a little surprised to see that it's still available.
My son uses it daily and has been happy with it. I think he's only had one issue with it: a few months ago, the keyboard became flaky - the space bar had stopped working consistently. It was repaired under warranty at no cost to my son. I was impressed - they sent him a box with a pre-paid shipping label attached; my son shipped it to the service center and it was shipped back, repaired, a few days later. (HP has even better warranties available too. My daughter has a much more expensive HP laptop on which the screen died after the laptop was dropped; HP sent someone to fix it. I had a similar warranty experience with an HP laptop that I own - on mine the keyboard had stopped working.)
With regard to minimal specs, here's what I would look at:
Processor: Get a processor with two or more cores. As for AMD versus Intel, I don't think it matters much right now. I've been typing in searches like "AMD ryzen 2950x vs Intel Core i9-9900k" into google and then looking at the results from cpu.userbenchmark.com.
RAM: 16GB minimum. Your OS or application might not actually need that much right now, but it might in two or three years time. Plus, the OS is able to use RAM for caching blocks from files on disk. (Linux has worked this way for years; I would expect that Windows 10 does something similar.)
Disk: Get an SSD instead of a mechanical hard disk. It'll be faster and more reliable too. As for capacity, get at least 500GB. It'll be more future-proof if you get 1TB or more.
Last edited by KBKB; 12-15-2018 at 12:53 PM.
▪█─────█▪ Equipment Crew #35
-!!!---!!!- No Excuses Homemade Equipment Crew #14
-
01-03-2019, 01:31 PM #605
RAM fans?
Okay, so in addition to the Intel NUC, I also built a much larger machine over the holidays.
The RAM that I purchased came with a pair of fan assemblies which are supposed to be mounted above the RAM. I haven't done this yet, mostly because I think that it'll clutter things up. I'm also skeptical that it'll do much to improve stability or operating temperatures. I'm not overclocking, nor do I plan to.
Has anyone here done a build that includes cooling fans for the RAM?
▪█─────█▪ Equipment Crew #35
-!!!---!!!- No Excuses Homemade Equipment Crew #14
-
01-14-2019, 09:49 AM #606
Okay, so that ADATA DOM (Disk-On-Module) that I used in my FreeNAS machine has died. It may have been bad for quite a while already - sadly, with these things you sometimes don't find out until you try to boot and run into problems.
I had the boot drive mirrored with a USB flash drive, so I didn't lose anything, though it did take me a while to figure out which of the drives was bad. Once I worked that out, FreeNAS automagically fixed the problem by grabbing another of the USB sticks that I had installed and adding it to the ZFS boot mirror for me.
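For anyone who ends up having to do the same fix by hand, it's roughly the zpool detach/attach dance below. This is only a sketch: "freenas-boot" is FreeNAS's usual boot pool name, and the device names are placeholders.
Code:
# Sketch only: replacing a dead member of the FreeNAS boot mirror by hand.
# "freenas-boot" is the usual boot pool name; the daXp2 device names are placeholders.
zpool status freenas-boot               # identify the FAULTED/UNAVAIL member
zpool detach freenas-boot da0p2         # drop the dead device from the mirror
zpool attach freenas-boot da1p2 da2p2   # attach a fresh stick alongside the surviving member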
Between then and now, M.2 has gotten a lot more popular. I'm using some Samsung 970 EVO cards (M.2 form factor) in my new build. That would have been a nice option for the NAS that I built back in 2016.
In the course of building my new machine, I did some work-related benchmarking. I timed the building of an open source project that I work on and found that with both the sources and the build tree in an NFS-mounted directory backed by ZFS, the build was taking nearly 20 minutes to complete. When I built on a "local" disk (though in a virtual machine and thus still backed by ZFS), the build took around two minutes. I added an Intel Optane device (which also connects to an M.2 slot) for use as the SLOG/ZIL and also for the L2ARC. Adding the separate ZIL (ZFS Intent Log) made a tremendous difference in performance. That operation that previously took about 20 minutes to complete can now be done in under 3 minutes. The performance isn't quite as good as running on a local disk, but it's pretty close.
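If anyone wants to try the same thing, adding the SLOG and L2ARC is just a couple of zpool commands. A rough sketch, assuming the Optane shows up as an NVMe device split into two partitions (the partition names are placeholders, and "tank" is just an example pool name):
Code:
# Sketch: dedicate one Optane partition to the SLOG (separate ZIL) and another to L2ARC.
# "tank" is an example pool name; the NVMe partition names are placeholders.
zpool add tank log /dev/nvme0n1p1      # separate intent log - helps sync-heavy NFS writes
zpool add tank cache /dev/nvme0n1p2    # L2ARC read cache
zpool status tank                      # new devices show up under "logs" and "cache"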
▪█─────█▪ Equipment Crew #35
-!!!---!!!- No Excuses Homemade Equipment Crew #14
-
01-14-2019, 12:07 PM #607
- Join Date: Dec 2013
- Location: Louisiana, United States
- Posts: 5,810
- Rep Power: 20731
What RAM is it? Corsair Dominator or some other high-end kit?
Could you link the kit? A lot of high-performance, low-latency kits require higher voltage, so fans are required for stability. I always try to get airflow over my RAM kits because I usually tweak settings and bump voltage. RAM waterblocks exist for this reason, though I've never used one.
Crews: Ivanko Barbell Crew #52, York Barbell Club #95, Equipment Crew #59
Lifts no one cares about:
SQ: 619x1 (suit bottoms, no belt) / 507x1 (raw, no belt)
BP: 392x1 (pause bench, raw)
DL: 500x1 (suit bottoms, no belt)
-
01-14-2019, 01:02 PM #608
This is Amazon's title for it: Corsair Vengeance LPX 128GB (8x16GB) DDR4 DRAM 2666MHz (PC4 21300) C16 Memory Kit - Black
Here's the link:
https://www.amazon.com/gp/product/B019HVRT5G/
I haven't done any OC type of tweaking.
When I purchased the RAM, I didn't know that it would come with fans, though after looking at Amazon's page again, I see that they're shown midway down the page.
▪█─────█▪ Equipment Crew #35
-!!!---!!!- No Excuses Homemade Equipment Crew #14
-
01-15-2019, 04:35 PM #609
- Join Date: Dec 2013
- Location: Louisiana, United States
- Posts: 5,810
- Rep Power: 20731
I think the Dominator kits used to come with fan kits (RAM running in excess of 2.0 V back in the DDR2/DDR3 days).
Corsair makes "Dominator Platinum" and "Vengeance" RAM coolers. They are $67 and $31 respectively on Amazon.
How is your case's airflow? Is there air moving over the RAM? If so, you should be fine.
Are you experiencing crashing or stuttering?
Crews: Ivanko Barbell Crew #52, York Barbell Club #95, Equipment Crew #59
Lifts no one cares about:
SQ: 619x1 (suit bottoms, no belt) / 507x1 (raw, no belt)
BP: 392x1 (pause bench, raw)
DL: 500x1 (suit bottoms, no belt)
-
01-15-2019, 08:46 PM #610
Airflow is pretty good, I think. I bought a gaming case - this one: https://www.amazon.com/gp/product/B0058P5S9A/ . It has a fan pulling air in at the front and three fans expelling air at the side, rear, and top. I also installed two of SuperMicro's mobile racks, which hold five desktop (3.5") drives each. As you know, these bays have two fans apiece which pull air in from the outside, past the drives. The only other fan is the CPU cooler fan - a 140mm fan from Noctua. Here's a link for the heatsink/cooler: https://www.amazon.com/gp/product/B074DX2SX7/ .
I would guess that there is air moving past the RAM, though the sticks are arranged orthogonal to the airflow. It's entirely possible - likely even - that there are spots which are hotter than others.
With regard to crashing - I've only had one crash that I can't explain. There have been a number of other crashes which were due to me not doing the iommu/vfio passthrough correctly. I have to tell the linux kernel not to grab two of the video cards that I'm using for VMs in the machine. When I fail to do this and then start one or both of the VMs, the system will crash after about ten minutes - a message is printed to the console. (After having gotten things working, I decided that I wanted the boot/root partition for the host OS/hypervisor to also be ZFS. In the course of getting it working, the iommu/vfio passthrough stuff stopped working which led to quite a few crashes until I disabled the automatic boot-up of the VMs.)
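For anyone wanting to try the same passthrough setup, the "don't grab these cards" part boils down to handing the guest GPUs' PCI IDs to vfio-pci before the regular driver binds them. A rough sketch of one common way to do it on a GRUB-based distro like CentOS (the IDs below are placeholders; lspci -nn shows the real ones):
Code:
# Sketch only: reserve the two guest GPUs (and their HDMI audio functions) for vfio-pci.
# The vendor:device IDs are placeholders - find yours with: lspci -nn | grep -iE 'vga|audio'
# Added to the kernel command line in /etc/default/grub:
GRUB_CMDLINE_LINUX="... amd_iommu=on iommu=pt vfio-pci.ids=1002:67df,1002:aaf0,1002:67ff,1002:aae0"
# Then regenerate the config and initramfs and reboot:
# grub2-mkconfig -o /boot/grub2/grub.cfg && dracut -f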
Hmm... I don't think I've said much about my new machine yet.
In mid-December, I got the itch to build a new machine. I had been thinking about updating my Windows machine for a while, but I had built it in a mini-ITX case and wasn't able to find a mini-ITX motherboard that could take 64GB of memory. My Linux desktop was even older, and I kind of wanted to update it too.
I decided to update both by building just one new machine. I had been hearing about running VMs where the VM directly (more or less) controls a graphics card, and that sounded like a fun project to me. So the host OS is CentOS 7.6 running a recent Linux kernel. I'm also running two VMs - one running Windows 10 and the other running Fedora 29. Each of these OSes has control over its own video card. There's a third video card too, so that I can see console messages from the host/hypervisor. The video cards for the guest OSes both use AMD chipsets; one's an RX 580, the other an RX 570. The host OS's video card is a less expensive Nvidia card (GT 710).
CPU is an AMD Ryzen TR 2950X - 16 cores / 32 threads. 128GB of RAM (as mentioned earlier); I wish now that I had gotten ECC memory, but I didn't know at the time that the motherboard I ended up using would support it - it does. Eight (old) 4TB drives that I re-purposed for this project went into a pool consisting of a single RAIDZ3 vdev; there's still room for adding another 8 drives if needed. The motherboard is another gaming part: ASRock X399 Professional Gaming sTR4. I also put in an HBA for driving extra disks. Oh, and for ZFS's ZIL/SLOG and L2ARC, I got an Intel Optane 905P. Use of a fast separate device for the ZIL has made a tremendous difference in performance for some of my workflow.
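Creating that pool is a one-liner. A sketch (the pool name matches what I use below; the device names are placeholders - on a real system, stable /dev/disk/by-id paths are a better idea):
Code:
# Sketch: a single RAIDZ3 vdev made from the eight 4TB drives.
# "tank" matches the pool name used below; sda..sdh are placeholder device names.
zpool create tank raidz3 sda sdb sdc sdd sde sdf sdg sdh
zpool status tank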
I'm really bad at staying up to date on my Linux desktop, mainly because I end up with a lot of stuff open and don't want to reboot and lose all of the context. Also, when I install a new version of Fedora, it usually takes a day or two to sort out all of the things which should work but no longer do. From now on, I figure I'll be able to install a new release in a VM and give it a try in a VNC window. Once I get everything installed and am happy with it, I can deploy it by connecting it to the three PCI devices I'm presently using for my Linux desktop. If it doesn't work out, I still have the old VM image and can just continue using it instead.
I have two ZFS pools on the host; one is for providing storage to the VMs - it's formed from the eight hard disks mentioned earlier. The other is a mirrored pool for the host's root filesystem. I create virtual disks for the VMs using a command like "zfs create -V 2TB tank/windows10-vm". I have automatic snapshots set up so that I can roll back if needed. So, if some less than desirable update happens to the OS in any of my VMs, I can restore back to some known working point. I can even preserve the "future" point and create a clone of the volume at some point in the past to see how things behaved. I wanted the same kind of protection for the host/hypervisor too, so I also put the host's root filesystem in its own mirrored pool.
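Concretely, the zvol-plus-snapshot workflow looks something like this (a sketch; the dataset name follows the tank/windows10-vm example above, and the snapshot names are made up):
Code:
# Sketch: back a VM with a zvol, snapshot before an update, and recover either way.
zfs create -V 2TB tank/windows10-vm                    # block device handed to the VM
zfs snapshot tank/windows10-vm@before-update           # cheap point-in-time copy
zfs rollback tank/windows10-vm@before-update           # throw away a bad update
# ...or keep the current state and branch a writable clone off the old snapshot instead:
zfs clone tank/windows10-vm@before-update tank/windows10-vm-old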
Home directories and other data (e.g., my photo files) live on the NAS that I built. I recently put in a 10G link between these two machines. In my testing of large file transfers, the bottleneck isn't the network connection; it's writing to the disk.
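An easy way to show it's the disk rather than the link is to measure the raw network throughput separately from a file copy. Roughly like this (a sketch; "nas" and the destination path are placeholders, and it assumes iperf3 is installed on both machines):
Code:
# Sketch: separate the network from the disks when chasing a transfer bottleneck.
# "nas" and the destination path are placeholders.
iperf3 -s                        # on the NAS: start the listener
iperf3 -c nas -t 30              # on the desktop: close to 10 Gbit/s means the link is fine
# Then time a real copy - if it's much slower than the iperf3 number, the disks are the limit:
rsync --progress bigfile.iso nas:/tank/scratch/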
▪█─────█▪ Equipment Crew #35
-!!!---!!!- No Excuses Homemade Equipment Crew #14
-
01-30-2019, 09:41 PM #611
I recently started renting a cheap VPS from interserver.net. One core, 2GB of RAM, 30GB of SSD space, and 2TB of data (in/out over their network) per month. It's costing me $6 per month, though I got something of a discount having paid for the year.
My intent was to forward TCP ports 80 and 443 to one of my VMs at home, on which I'm running Nextcloud. My ISP blocks port 80, and port 80 access is needed for the usual Let's Encrypt ACME validation used to obtain a (free) SSL certificate. I have since found out that there are other ACME clients (such as acme.sh) which can verify control of a machine using only port 443. In any case, I got it working both ways - with port forwarding through the VPS and also without, by using acme.sh.
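If I'm remembering the flags right, the port-443-only issuance with acme.sh looks roughly like this (a sketch; the domain, file paths, and reload command are placeholders):
Code:
# Sketch, from memory: issue a certificate with acme.sh's TLS-ALPN validation,
# which only needs port 443. example.com and the paths are placeholders.
acme.sh --issue --alpn -d example.com
acme.sh --install-cert -d example.com \
    --key-file       /etc/ssl/private/example.com.key \
    --fullchain-file /etc/ssl/certs/example.com.pem \
    --reloadcmd      "systemctl reload nginx"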
The point of running Nextcloud is to provide my daughter and other family members a place to back up their data. My daughter had an incident with her laptop last year which necessitated (according to the help that she got) a re-installation of Windows. She lost some work though she claims that it wasn't that much.
Once the Nextcloud client app is installed on your laptop, all you need to do is drag files into the "Nextcloud" folder to synchronize them to the server. Any other devices that you have connected in a similar fashion will see those same changes in their Nextcloud folders. You could, I suppose, also just do all of your work in the Nextcloud folder - every time you make a change, it gets synchronized to the server and to all the other clients too.
Syncthing provides similar functionality, but it (IMO) isn't quite as polished yet. Nextcloud also provides a lot of other functionality, including such things as video conferencing.
The VPS that I'm using has been a target of hacking attempts. When I logged in yesterday, I saw:
Code:
Last failed login: Tue Jan 29 16:15:14 MST 2019 from shpd-178-69-238-69.vologda.ru on ssh:notty
There were 47 failed login attempts since the last successful login.
And when I logged in today, I saw this:
Code:
Last failed login: Wed Jan 30 15:53:25 MST 2019 from 155.94.181.2 on ssh:notty
There were 74 failed login attempts since the last successful login.
▪█─────█▪ Equipment Crew #35
-!!!---!!!- No Excuses Homemade Equipment Crew #14
-
06-22-2019, 07:41 PM #612
-
06-23-2019, 08:44 AM #613
- Join Date: Apr 2013
- Location: Kansas, United States
- Age: 37
- Posts: 22,393
- Rep Power: 94891
-
06-23-2019, 10:35 AM #614
-
06-24-2019, 02:11 PM #615
- Join Date: Dec 2013
- Location: Louisiana, United States
- Posts: 5,810
- Rep Power: 20731
The old Ryzen stuff is pretty cheap. I wouldn't get less than a 6c/12t CPU at this point, given how cheap they are.
Seems like browsers eat up RAM and 8GB isn't enough these days.
I am waiting to see how the results stack up. I'll either get the 3950X and OC it or wait for Threadripper. I don't need more than 16 cores, but I am wondering if more PCIe lanes for NVMe drives will be worth it. I was going to consider the Asus Crosshair 8 Formula (I want to water-cool the VRMs, and that saves me a lot of time), but $700 for a mobo is asking a LOT. If Threadripper boards are not really any more expensive than AM4 boards this time, it might be worth it to jump to TR4. I'd have a solid upgrade path of either high-core-count CPUs in the coming years, or at least one more generation if I want to do a CPU upgrade. I doubt PCIe 4 will get maxed out in the coming years; considering we don't even max out PCIe 3.0 on the consumer side with any video cards, I can't see the spec getting pushed aside for PCIe 5.0 immediately.
Thoughts?
Crews: Ivanko Barbell Crew #52, York Barbell Club #95, Equipment Crew #59
Lifts no one cares about:
SQ: 619x1 (suit bottoms, no belt) / 507x1 (raw, no belt)
BP: 392x1 (pause bench, raw)
DL: 500x1 (suit bottoms, no belt)
-
06-24-2019, 03:14 PM #616
I think it's definitely worth waiting for independent benchmark results. PCIe 4 sounds really cool, but it might be worth waiting for at least one motherboard generation (or a bunch of positive reviews) for that to shake out too.
As for AM4 vs TR4... has AMD made any announcements of new products which will use the TR4 socket? I would guess that the greater number of contacts (just over 3X) on the TR4 would allow for more PCIe 4 slots. (Just a guess though - I haven't studied the pin-out to see what the contacts are actually used for.)
▪█─────█▪ Equipment Crew #35
-!!!---!!!- No Excuses Homemade Equipment Crew #14
-
06-16-2020, 03:09 PM #617
Streacom BC1 Open Benchtable
Below is a photo of a recent build on top of a Streacom BC1 Open Benchtable. The benchtable made it easy to do bring-up / testing.
The only problems I ran into while using it were: 1) one of the PCIe standoffs had messy threads, which I had to clean up with a die before I could screw it into the benchtable; and 2) I ran out of 6-32 thumbscrews. They provide six of them, and you're supposed to use three to attach the power supply, but attaching two disk drives to the bottom of the chassis takes four more, so you'll be one short.
▪█─────█▪ Equipment Crew #35
-!!!---!!!- No Excuses Homemade Equipment Crew #14
-
06-16-2020, 05:44 PM #618