A hardware-software co-designed computing-in-memory system that addresses the efficiency bottlenecks of graph learning. The hardware is a resistive memory array whose cells are simply programmed to random conductances at the beginning, and the weights of our graph neural network (GNN) are likewise randomly fixed, except for the last layer. The system thereby sidesteps the von Neumann bottleneck, the programming non-idealities of resistive memory cells, and the training cost of GNNs.
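The random-weight scheme can be illustrated in software: the hidden layers use fixed random weight matrices (standing in for the randomly programmed resistive-memory conductances) and only the final readout layer is trained. The sketch below is a minimal, hypothetical example with a toy 6-node graph, NumPy in place of the analog hardware, and ridge regression as one possible way to fit the readout; none of these specifics come from the system itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes in two triangles joined by one edge (illustrative data only)
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(6, 4))        # node features
y = np.array([0, 0, 0, 1, 1, 1])   # node labels (one class per triangle)

# Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}
A_sl = A + np.eye(6)
d = A_sl.sum(axis=1)
A_hat = A_sl / np.sqrt(np.outer(d, d))

# Two hidden layers with random, FIXED weights -- these play the role of the
# randomly programmed resistive-memory cells and are never updated.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 16))
H = np.tanh(A_hat @ np.tanh(A_hat @ X @ W1) @ W2)   # random graph embedding

# Only the last (readout) layer is trained; here via closed-form ridge regression.
Y = np.eye(2)[y]                   # one-hot targets
W_out = np.linalg.solve(H.T @ H + 1e-3 * np.eye(16), H.T @ Y)
pred = (H @ W_out).argmax(axis=1)
print("train accuracy:", (pred == y).mean())
```

Because the hidden weights are frozen, training reduces to a single linear solve over the readout layer, which is what removes the usual end-to-end GNN training cost.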