Channel: VMware Communities: Message List

Re: Latest gcc leads to `incompatible gcc/plugin versions`


The solution for this problem was (running Manjaro):

 

Remove vmware-workstation, remove all old kernels, install the latest kernel (in my case 5.3.8-1), and re-install vmware-workstation (downloaded from the VMware web page). That works.


Re: Export to CSV a list of VMs with specific custom attribute values


Do you get the error for all VMs? Or only for the ones with a specific value in attribute 1 and 2?

How do the attributes show in the Web Client for these problematic VMs?

 

Which PowerCLI version are you using?

Re: vCenter Server 5.5


If VMware wanted the software to be available, they would make it available. But they don't do that, and this is their official forum - what happens elsewhere on the internet is irrelevant in this case.

 

I'm one of the forum moderators, and in these sorts of circumstances it would generally be the case that your comments would be deleted - most certainly if you had posted a link to the software.

 

So while our opinion may differ, my comments are made taking the forum guidelines into consideration.

Re: Why does my Keyboard become non responsive on Mac side when running VM?


Moderator note: Technical/product issue, moved to the VMware Fusion area.

Re: vCenter Server 5.5


Thanks Scott. I am off this thread.

How to Achieve One Device, One HBA, One Path


Hi All,

 

There is a scenario where the storage admins messed up the configuration. They have presented 1500-1600 paths, which is more than our limit of 1024. Now I can see some devices through 4, 3, 2, or only 1 HBA instead of through all 4 HBAs. It is very risky to change this from the storage end.

 

Is there any way we can restrict every device to one path per HBA? (For example, if I have 4 HBAs and 100 devices, then I want every device to come in on all 4 HBAs via one path each, which means 400 paths in total and no more than 100 paths per HBA.)

 

Thanks
Suresh Siwach
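As a back-of-the-envelope sketch of the arithmetic in the post (not a VMware tool; it assumes a uniform presentation where every device shows the same number of paths on every HBA), the current and desired layouts compare against the host path limit like this:

```python
# Rough path-count arithmetic for the scenario described above.
# Assumes a uniform layout: every device is presented on every HBA
# with the same number of paths (real arrays may be uneven).

ESXI_PATH_LIMIT = 1024  # per-host path limit mentioned in the post

def total_paths(devices: int, hbas: int, paths_per_hba_per_device: int) -> int:
    """Total paths seen by the host under a uniform presentation."""
    return devices * hbas * paths_per_hba_per_device

# Current (broken) presentation: roughly 4 paths per device per HBA.
current = total_paths(devices=100, hbas=4, paths_per_hba_per_device=4)
print(current, current > ESXI_PATH_LIMIT)   # 1600 True -> over the limit

# Desired layout from the post: one path per device per HBA.
desired = total_paths(devices=100, hbas=4, paths_per_hba_per_device=1)
print(desired, desired <= ESXI_PATH_LIMIT)  # 400 True -> within the limit
```

The device count of 100 is the example from the post; the actual fix still has to happen in the array's LUN masking/zoning, since ESXi itself cannot drop presented paths below what the storage exports.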

numa.autosize.vcpu.maxPerVirtualNode lack of info


I have a VM with 64 vCPUs and 512 GB RAM; it's a massive DB. The ESXi 6.5 hosts are HP DL560s with 4 sockets (22 cores each, plus HT) and 1.5 TB RAM. Following Frank Denneman's book vSphere 6.5 Host Resources Deep Dive, I aimed at keeping the VPD on as few physical sockets as possible. By disabling the Hot Add CPU feature and setting preferHT to True I increased performance quite a lot, and I expected to see the cores spread across two physical sockets. However, while VMware KB 2003582 explains how to implement the preferHT setting, it does not mention something Frank Denneman does say in his book:

Quote:

“Please remember to adjust the numa.autosize.vcpu.maxPerVirtualNode setting in the VM if it is already been powered-on once. This setting overrides the numa.vcpu.preferHT=TRUE setting”

End quote

 

I read the above after I made the initial changes to the VM, and I have now noticed that its numa.autosize.vcpu.maxPerVirtualNode value is 11. According to Virtual NUMA Controls I should get 6 virtual nodes by dividing 64 by 11, but I see the VM has 7. This is another thing I don't understand.

By which criteria do I adjust the numa.autosize.vcpu.maxPerVirtualNode value?

Shall I set it to 44, as that is the maximum number of logical cores in a physical socket? Or shall I disable it and let the system make its best decision? If so, how do I disable it? This is the current layout of the VM's CPU resources:

 

Although performance has improved, I'm not happy with the distribution of the cores, especially considering that homeNode 3 is not used at all.

 

 

So, to recap, my questions to the experienced admins are the following:

 

1. By which criteria do I adjust the numa.autosize.vcpu.maxPerVirtualNode value so that the preferHT setting is enforced correctly?

2. Why is homeNode 3 not in use at all?

3. I knew that in 6.5 the coresPerSocket setting was decoupled from the socket setting, so it no longer really matters whether you set 12 sockets x 1 core or 1 socket x 12 cores (unless license restrictions are in place). However, in Frank Denneman's book I read:

quote

"If preferHT is used, we recommend aligning the cores per socket to the physical CPU package layout. This leverages the OS and application LLC optimizations the most "

end quote

So, in this case, is the use of coresPerSocket effective? Should I then set 2 sockets x 32 coresPerSocket? Frankly, that is an option I haven't seen available in the VM Settings window.

4. Why does the VM have 7 virtual nodes instead of 6?
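On question 4, the expected node count in the post comes from simple ceiling division; here is a minimal sketch of that arithmetic (an assumption about how the autosize setting is interpreted, not the actual ESXi scheduler, which evidently can split differently, hence the observed 7 nodes):

```python
import math

def expected_virtual_numa_nodes(vcpus: int, max_per_virtual_node: int) -> int:
    """Naive expectation: split vCPUs into virtual NUMA nodes of at most
    max_per_virtual_node vCPUs each (plain ceiling division)."""
    return math.ceil(vcpus / max_per_virtual_node)

# 64 vCPUs with numa.autosize.vcpu.maxPerVirtualNode = 11:
print(expected_virtual_numa_nodes(64, 11))  # 6, yet the VM reports 7 nodes

# With preferHT on a 22-core + HT socket (44 logical CPUs), setting the
# value to 44 would suggest 2 virtual nodes, one per physical socket:
print(expected_virtual_numa_nodes(64, 44))  # 2
```

The gap between the naive 6 and the observed 7 is exactly the discrepancy being asked about; the sketch only shows where the expectation of 6 comes from.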

Re: Very slow speed on SSD


My Assignment Services is no. 1 when it comes to providing online assignment help in Australia. We have a team of expert writers who are qualified at every level of econometrics, from basic to advanced.


Re: Mac Catalina and Fusion Has Black Screen

How to get vRA Compute Cluster Location Tag?


I am having a hard time finding an API to list the location tags of all compute clusters that are shown in vRA under Infrastructure --> Compute Clusters. I tried the vRA 7.3 API (https://docs.vmware.com/en/vRealize-Automation/7.3/vrealize-automation-73-programming-guide.pdf) but could not find it; all it has is reservations, which list the compute cluster but no other details about the compute cluster itself. Any idea where I can find an API to list all compute cluster objects and the properties of each object that include the location? For reference, here is what I am trying to list.

 

vSAN stretched cluster and SRM


Hi

We are using a vSAN stretched cluster and have started to look at SRM as a site recovery orchestration engine. Does SRM require 2 vCenters?

Re: vSAN stretched cluster and SRM


Yes, you need multiple vCenters.

Re: vSAN, strange behavior


In short, today I managed to reproduce this bug twice.

Not that I was trying to, but I caught it twice.

 

There are also a couple of NFS datastores; I decided to move a couple of machines from there to vSAN. After about 15-20 minutes the network interface stops responding, and this is what the logs show.

 

2019-11-02T10:17:04.713Z cpu6:66570)CMMDS: AgentRxHeartbeatRequest:1260: Agent replied to reliable heartbeat request.Last msg sent: 59 ms back

2019-11-02T10:17:09.714Z cpu6:66570)CMMDS: CMMDSStateMachineReceiveLoop:1131: Error receiving from 5ca87082-51f1-ea9d-61d3-80c16e23f318

2019-11-02T10:17:09.714Z cpu6:66570)CMMDS: CMMDSStateDestroyNode:676: Destroying node 5ca87082-51f1-ea9d-61d3-80c16e23f318: Failed to receive from node

2019-11-02T10:17:09.714Z cpu6:66570)CMMDS: AgentDestroyNode:1381: Lost master node (5ca87082-51f1-ea9d-61d3-80c16e23f318), can't handle that and will transition to discovery

2019-11-02T10:17:09.714Z cpu6:66570)CMMDSNet: CMMDSNet_SetMaster:1071: Updating master node: old=5ca87082-51f1-ea9d-61d3-80c16e23f318 new=none

2019-11-02T10:17:09.714Z cpu6:66570)CMMDS: CMMDSLogStateTransition:1309: Transitioning(5da02d30-a295-a5e4-3f35-0025906a91da) from Agent to Discovery: (Reason: Failed to receive from node)

2019-11-02T10:17:09.714Z cpu6:66570)CMMDS: UpdateDiscoveryInfoForNode:246: First time Init of discoveryInfo for node 5da02d30-a295-a5e4-3f35-0025906a91da as reported by node 5da02d30-a295-a5e4-3f35-0025906a91da.

2019-11-02T10:17:09.716Z cpu3:66583)DOM: DOMOwner_SetLivenessState:4961: Object e1f3365d-7dcf-7d01-a6d0-002590388072 lost liveness [0x43951e275080]

2019-11-02T10:17:09.717Z cpu3:66583)DOM: DOMOwner_SetLivenessState:4961: Object a290b65d-50f9-df05-4d74-002590388072 lost liveness [0x43951e1eacc0]

2019-11-02T10:17:09.717Z cpu7:66581)DOM: DOMOwner_SetLivenessState:4961: Object 41806e5c-508a-ab58-857c-00259099c4de lost liveness [0x43951e297a00]

2019-11-02T10:17:09.717Z cpu1:66584)DOM: DOMOwner_SetLivenessState:4961: Object 60a30c5d-f0c0-832e-2294-0025906a91da lost liveness [0x43951e35da00]

2019-11-02T10:17:09.717Z cpu7:66581)DOM: DOMOwner_SetLivenessState:4961: Object 39b3905c-8c58-fc78-44df-00259099c4de lost liveness [0x43951e352a80]

2019-11-02T10:17:09.717Z cpu1:66584)DOM: DOMOwner_SetLivenessState:4961: Object 5b29025d-8c4d-134e-d14d-002590388072 lost liveness [0x43951e0df0c0]

2019-11-02T10:17:09.717Z cpu3:66583)DOM: DOMOwner_SetLivenessState:4961: Object 95fc6e5c-3168-c459-8094-002590388072 lost liveness [0x43951e2c5300]

 

2019-11-02T10:17:10.713Z cpu5:66570)CMMDSNet: CMMDSNetGroupIOReceive:1799: Creating node 5ca87082-51f1-ea9d-61d3-80c16e23f318 from host unicast channel: 10.10.10.33:12321.

2019-11-02T10:17:12.714Z cpu5:66570)CMMDS: CMMDSLogStateTransition:1309: Transitioning(5da02d30-a295-a5e4-3f35-0025906a91da) from Discovery to Rejoin: (Reason: Found a master node)

2019-11-02T10:17:12.714Z cpu5:66570)CMMDS: RejoinSetup:2732: Setting batching to 1

2019-11-02T10:17:12.714Z cpu5:66570)CMMDSNet: CMMDSNet_SetMaster:1071: Updating master node: old=none new=5ca87082-51f1-ea9d-61d3-80c16e23f318

2019-11-02T10:17:12.714Z cpu5:66570)CMMDS: CMMDSAgentlikeSetMembership:508: Setting new membership uuid 13f0b55d-5ade-1957-bdd2-80c16e23f318

2019-11-02T10:17:14.157Z cpu5:66570)CMMDSNet: CMMDSNetGroupIOReceive:1799: Creating node 5da6c454-62b1-66c4-5fdf-00259099c4de from host unicast channel: 10.10.10.33:12321.

2019-11-02T10:17:30.828Z cpu5:68778)HBX: 2959: '95fc6e5c-3168-c459-8094-002590388072': HB at offset 3424256 - Waiting for timed out HB:

2019-11-02T10:17:30.828Z cpu5:68778)  [HB state abcdef02 offset 3424256 gen 281 stampUS 654540065858 uuid 5db35b02-358edc0c-5bfb-0025906a91da jrnl <FB 502000> drv 14.81 lockImpl 4 ip 192.168.71.224]

2019-11-02T10:17:33.750Z cpu18:68863 opID=70dfc418)World: 12235: VC opID lro-3591722-5f08d905-06-01-9b-8ed5 maps to vmkernel opID 70dfc418

2019-11-02T10:17:33.750Z cpu18:68863 opID=70dfc418)WARNING: com.vmware.vmklinkmpi: VmklinkMPI_CallSync:1303: No response received for message 0x5d6e on osfs-vmklink (wait status Timeout)

2019-11-02T10:17:33.750Z cpu18:68863 opID=70dfc418)osfs: OSFSVmklinkCall:231: vmklink call failed with: Timeout

2019-11-02T10:17:33.750Z cpu18:68863 opID=70dfc418)osfs: OSFS_VmklinkLookup:479: Error making Lookup VmklinkCall

2019-11-02T10:17:33.750Z cpu18:68863 opID=70dfc418)osfs: OSFS_Lookup:2579: Lookup error: file = 82fc6e5c-fcf8-bbc5-e79b-002590388072, status = Timeout

2019-11-02T10:17:33.751Z cpu12:2225397 opID=70dfc418)WARNING: VSAN: Vsan_OpenDevice:1055: Failed to open VSAN device '82fc6e5c-fcf8-bbc5-e79b-002590388072' with DevLib: Busy

2019-11-02T10:17:33.751Z cpu12:2225397 opID=70dfc418)WARNING: VSAN: Vsan_OpenDevice:1055: Failed to open VSAN device '82fc6e5c-fcf8-bbc5-e79b-002590388072' with DevLib: Busy

2019-11-02T10:17:33.751Z cpu12:2225397 opID=70dfc418)Vol3: 2602: Could not open device '82fc6e5c-fcf8-bbc5-e79b-002590388072' for probing: Busy

 

2019-11-02T10:18:56.107Z cpu20:2225627 opID=65caaa64)osfs: OSFS_MountChild:3913: Failed to probe OSFS child for device name '5eba6e5c-4dcd-7bf6-0155-002590388072': No filesystem on the device

2019-11-02T10:18:56.522Z cpu2:2225615 opID=70dfc418)Vol3: 1121: Couldn't read volume header from : No connection

2019-11-02T10:18:56.522Z cpu2:2225615 opID=70dfc418)Vol3: 1121: Couldn't read volume header from : No connection

2019-11-02T10:18:56.522Z cpu2:2225615 opID=70dfc418)Vol3: 1121: Couldn't read volume header from : No connection

2019-11-02T10:18:56.522Z cpu2:2225615 opID=70dfc418)Vol3: 1121: Couldn't read volume header from : No connection

2019-11-02T10:18:56.522Z cpu2:2225615 opID=70dfc418)osfs: OSFS_MountChild:3913: Failed to probe OSFS child for device name '82fc6e5c-fcf8-bbc5-e79b-002590388072': No filesystem on the device

2019-11-02T10:18:56.522Z cpu18:68863 opID=70dfc418)osfs: DebugDumpVmklinkResponse:787: {ID: 5d90; type:LOOKUP; pid:[    vsan]; cid:52165933297ec489-71adb8b215efc33f; status:No filesystem on the device; bufLen:0;

2019-11-02T10:18:56.522Z cpu18:68863 opID=70dfc418)osfs: OSFS_VmklinkLookup:492: Failure (p [    vsan], c 52165933297ec489-71adb8b215efc33f)

2019-11-02T10:18:56.522Z cpu18:68863 opID=70dfc418)osfs: OSFS_Lookup:2579: Lookup error: file = 82fc6e5c-fcf8-bbc5-e79b-002590388072, status = No filesystem on the device

 

 

 

2019-11-02T10:19:31.745Z cpu17:68850 opID=5e7de0f7)NFS: 2329: [Repeated 2 times] Failed to get object (0x43912791b356) 36 f71d5946 3f751b68 c4b5cc0d 2b9cb0de 8000a 0 1b 0 0 0 0 0 :No connection

2019-11-02T10:19:31.745Z cpu17:68850 opID=5e7de0f7)NFS: 2334: Failed to get object (0x43912791b386) 36 f71d5946 3f751b68 c4b5cc0d 2b9cb0de 8000a 0 1b 0 0 0 0 0 :No connection

2019-11-02T10:19:31.745Z cpu17:68850 opID=5e7de0f7)NFS: 2334: Failed to get object (0x43912791b356) 36 f71d5946 3f751b68 c4b5cc0d 2b9cb0de 8000a 0 1b 0 0 0 0 0 :No connection

2019-11-02T10:19:31.745Z cpu17:68850 opID=5e7de0f7)NFS: 2329: [Repeated 2 times] Failed to get object (0x43912791b356) 36 f71d5946 3f751b68 c4b5cc0d 2b9cb0de 8000a 0 1b 0 0 0 0 0 :No connection

2019-11-02T10:19:31.745Z cpu17:68850 opID=5e7de0f7)NFS: 2334: Failed to get object (0x43912791b386) 36 f71d5946 3f751b68 c4b5cc0d 2b9cb0de 8000a 0 1b 0 0 0 0 0 :No connection

2019-11-02T10:19:31.752Z cpu23:68843 opID=794a29f3)World: 12235: VC opID lro-3591721-4fcdfa5c-02-01-66-8edb maps to vmkernel opID 794a29f3

2019-11-02T10:19:31.752Z cpu23:68843 opID=794a29f3)osfs: OSFS_CreateFile:168: mkdir not enabled, failing mkdir request (pid: [    vsan], cid: 52165933297ec489-71adb8b215efc33f, childName: 83ba6e5c-0cec-47bb-064b-00259099c4de)

2019-11-02T10:19:31.764Z cpu23:68843 opID=794a29f3)osfs: OSFS_CreateFile:168: mkdir not enabled, failing mkdir request (pid: [    vsan], cid: 52165933297ec489-71adb8b215efc33f, childName: 83ba6e5c-0cec-47bb-064b-00259099c4de)

2019-11-02T10:19:31.775Z cpu23:68843 opID=794a29f3)osfs: OSFS_CreateFile:168: mkdir not enabled, failing mkdir request (pid: [    vsan], cid: 52165933297ec489-71adb8b215efc33f, childName: 83ba6e5c-0cec-47bb-064b-00259099c4de)

 

After I bounced the network interface on the faulty node, cluster recovery started immediately.

 

2019-11-02T10:26:32.834Z cpu0:2227886)NFS: 2329: [Repeated 7 times] Failed to get object (0x439466c1b356) 36 f71d5946 3f751b68 c4b5cc0d 2b9cb0de 8000a 0 1b 0 0 0 0 0 :No connection

2019-11-02T10:26:32.834Z cpu0:2227886)NFS: 2334: Failed to get object (0x4394b571b3b6) 36 6c4fded0 190cf18b 263836c6 dd541de 8156000a 18 da1e2 0 0 0 0 0 :No connection

2019-11-02T10:26:36.118Z cpu3:67832)BC: 2471: Failed to write (uncached) object '.iormstats.sf': No connection

2019-11-02T10:26:38.959Z cpu10:66077)<6>igb: vmnic1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None

2019-11-02T10:26:39.730Z cpu3:66570)CMMDSNet: CMMDSNetGroupIOReceive:1799: Creating node 5ca87082-51f1-ea9d-61d3-80c16e23f318 from host unicast channel: 10.10.10.33:12321.

2019-11-02T10:26:39.730Z cpu3:66570)CMMDS: MasterAbdicateTo:4001: Abdicating to 5ca87082-51f1-ea9d-61d3-80c16e23f318, will transition in 5000 ms

2019-11-02T10:26:39.747Z cpu11:65629)NetPort: 1881: disabled port 0x3000002

2019-11-02T10:26:39.747Z cpu8:1407610)NetSched: 628: vmnic1-0-tx: worldID = 1407610 exits

2019-11-02T10:26:39.747Z cpu11:65629)Uplink: 10095: enabled port 0x3000002 with mac 00:25:90:6a:91:db

2019-11-02T10:26:40.340Z cpu1:66286)NFS: 346: Restored connection to the server 10.10.10.1 mount point /tank/nfsxen, mounted as f71d5946-3f751b68-0000-000000000000 ("NAS_01")

2019-11-02T10:26:40.340Z cpu3:65828)StorageApdHandler: 507: APD exit event for 0x430c25768510 [f71d5946-3f751b68]

2019-11-02T10:26:40.340Z cpu3:65828)StorageApdHandlerEv: 117: Device or filesystem with identifier [f71d5946-3f751b68] has exited the All Paths Down state.

2019-11-02T10:26:40.368Z cpu1:66286)NFSLock: 578: Start accessing fd 0x43045a7f2558(.iormstats.sf) again

2019-11-02T10:26:40.416Z cpu1:66286)NFSLock: 578: Start accessing fd 0x43045a788348(ubuntu-18.04.2-live-server-amd64.iso) again

2019-11-02T10:26:40.416Z cpu1:66286)NFSLock: 578: Start accessing fd 0x43045a7fd958(WINDOWS_7_PRO_OA_CIS_AND_GE_GSP1RMCPRXFREO_RU_DVD.iso) again

2019-11-02T10:26:44.657Z cpu10:66570)CMMDS: CMMDSLogStateTransition:1309: Transitioning(5da02d30-a295-a5e4-3f35-0025906a91da) from Master to Discovery: (Reason: Abdication timer expired)

2019-11-02T10:26:44.657Z cpu10:66570)CMMDSNet: CMMDSNet_SetMaster:1071: Updating master node: old=5da02d30-a295-a5e4-3f35-0025906a91da new=none

2019-11-02T10:26:44.657Z cpu10:66570)CMMDS: MasterRemoveNodeFromMembership:6581: Removing node 5da02d30-a295-a5e4-3f35-0025906a91da from the cluster membership

2019-11-02T10:26:44.657Z cpu10:66570)CMMDS: UpdateDiscoveryInfoForNode:246: First time Init of discoveryInfo for node 5da02d30-a295-a5e4-3f35-0025906a91da as reported by node 5da02d30-a295-a5e4-3f35-0025906a91da.

2019-11-02T10:26:44.730Z cpu10:66570)CMMDSNet: CMMDSNetGroupIOReceive:1799: Creating node 5ca87082-51f1-ea9d-61d3-80c16e23f318 from host unicast channel: 10.10.10.33:12321.

2019-11-02T10:26:46.730Z cpu2:66570)CMMDS: CMMDSLogStateTransition:1309: Transitioning(5da02d30-a295-a5e4-3f35-0025906a91da) from Discovery to Rejoin: (Reason: Found a master node)

2019-11-02T10:26:46.730Z cpu2:66570)CMMDS: RejoinSetup:2732: Setting batching to 1

2019-11-02T10:26:46.730Z cpu2:66570)CMMDSNet: CMMDSNet_SetMaster:1071: Updating master node: old=none new=5ca87082-51f1-ea9d-61d3-80c16e23f318

2019-11-02T10:26:46.730Z cpu2:66570)CMMDS: CMMDSAgentlikeSetMembership:508: Setting new membership uuid 13f0b55d-5ade-1957-bdd2-80c16e23f318

2019-11-02T10:26:47.730Z cpu2:66570)CMMDS: RejoinRxMasterHeartbeat:1941: Saw self listed in master heartbeat

2019-11-02T10:26:47.731Z cpu2:66570)CMMDS: RejoinRequestSnapshotWork:742: Send a snapshot request to master successfully.

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495441]:Inserting (actDir:0):u:5ca87082-51f1-ea9d-61d3-80c16e23f318 o:00000000-0000-0000-0000-000000000000 r:0 t:NODE

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495442]:Inserting (actDir:0):u:5daab33b-8291-65be-07e7-002590388072 o:00000000-0000-0000-0000-000000000000 r:0 t:NODE

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495443]:Inserting (actDir:0):u:5da6c454-62b1-66c4-5fdf-00259099c4de o:00000000-0000-0000-0000-000000000000 r:0 t:NODE

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495462]:Inserting (actDir:0):u:4ad6a65d-e49b-601f-834d-00259099c4de o:5da6c454-62b1-66c4-5fdf-00259099c4de r:1 t:NET_INTERFACE

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495463]:Inserting (actDir:0):u:bddf645d-7ca6-2851-b56c-0025906a91da o:5cbb0ad5-5b55-b0a4-f79d-0025906a91da r:2 t:NET_INTERFACE

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495464]:Inserting (actDir:0):u:0ab5645d-240a-5185-21f7-002590388072 o:5cc1867d-e247-57cd-6bcd-002590388072 r:1 t:NET_INTERFACE

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495465]:Inserting (actDir:0):u:42b5645d-6841-0e96-f6f8-80c16e23f318 o:5ca87082-51f1-ea9d-61d3-80c16e23f318 r:1 t:NET_INTERFACE

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495466]:Inserting (actDir:0):u:05b5645d-3413-68ce-2546-00259099c4de o:5cc05838-8052-12a0-c9aa-00259099c4de r:1 t:NET_INTERFACE

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495467]:Inserting (actDir:0):u:4e9ca05d-3c94-b4f3-763f-0025906a91da o:5da02d30-a295-a5e4-3f35-0025906a91da r:1 t:NET_INTERFACE

2019-11-02T10:26:47.736Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:41: [495468]:Inserting (actDir:0):u:70bcaa5d-480f-9bf5-c2a0-002590388072 o:5daab33b-8291-65be-07e7-002590388072 r:1 t:NET_INTERFACE

2019-11-02T10:26:47.738Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:83: [496114]:Adding a new Membership entry (13f0b55d-5ade-1957-bdd2-80c16e23f318) with 4 members:

2019-11-02T10:26:47.738Z cpu2:66570)CMMDS: CMMDSUtil_PrintArenaEntry:87: [496114]:Inserting (actDir:0):u:5ca87082-51f1-ea9d-61d3-80c16e23f318 o:5ca87082-51f1-ea9d-61d3-80c16e23f318 r:7 t:SUB_CLUSTER_MEMBERSHIP

2019-11-02T10:26:47.739Z cpu2:66570)CMMDS: RejoinRxSnapshotResponse:639: Applied snapshot at master sequence number 8281258

 

Another guess is that this started after the updates.

When all nodes were on build 13932383, everything worked correctly and I had no trouble at all.

Re: vRO 8 - Bind variable to workflow input issue


Can someone reproduce this issue?

 

thanks

Creating a VM from a snapshot


Hi all

 

I am trying to create a new VM from a snapshot. Google showed me how to do it using Workstation (WS): I connect to vCenter and pick the master image snapshot I want to clone from, but then I get the following:

"You cannot make a clone of a shared or remote virtual machine."

 

Can someone please help me with how to do this and create a new VM from a snapshot? My vCenter (5.1) is old and there is no option to clone from within vCenter; when you right-click the snapshot nothing comes up. I guess that is why they want you to use WS.

 

Thanks for all the help


Re: App Volumes 4.0 Public Beta??

Re: vsphere 6.7 LSI raid card, can't get drive status


Hi, yzsz.

Please tell me, which version of LSA works for you?

I have ESXi 6.7U3 and have tried many versions of LSA, but I can't log in to the host; I get the error "getClass: classname VMware_UserAuthorizationService not found" in syslog.

Re: usb drive won't mount


None of the solutions work with my external USB HDD in NTFS format. I have no problem with a USB flash drive or an external HDD in exFAT format.

Re: Linux Can't Connect on Starbucks WiFi


The Player UI is installed alongside Workstation Pro. So you can just install the Pro version and, after the 30-day trial expires, still use the Player UI to run your VMs.

In that case, is it still necessary to copy the Virtual Network Editor components to the Player folder and create a shortcut to the editor exe?

 

thx

Re: Silent death of vmplayer 15.5. (-14665864.x86_64) upon attempting to create virtual machine


*** SOLVED ***

 

Never mind.

 

I'm such a fool.

 

I hadn't enabled Intel Virtualization Technology in the BIOS.

 

All cool
